Jan 24 02:39:11.022014 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 02:39:11.022049 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 02:39:11.022063 kernel: BIOS-provided physical RAM map:
Jan 24 02:39:11.022079 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 24 02:39:11.022089 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 24 02:39:11.022099 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 24 02:39:11.022110 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 24 02:39:11.022121 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 24 02:39:11.022131 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 24 02:39:11.022142 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 24 02:39:11.022152 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 02:39:11.022163 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 24 02:39:11.022178 kernel: NX (Execute Disable) protection: active
Jan 24 02:39:11.022189 kernel: APIC: Static calls initialized
Jan 24 02:39:11.022201 kernel: SMBIOS 2.8 present.
Jan 24 02:39:11.022213 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 24 02:39:11.022225 kernel: Hypervisor detected: KVM
Jan 24 02:39:11.022241 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 02:39:11.022252 kernel: kvm-clock: using sched offset of 4504656374 cycles
Jan 24 02:39:11.022265 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 02:39:11.022292 kernel: tsc: Detected 2499.998 MHz processor
Jan 24 02:39:11.022304 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 02:39:11.022330 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 02:39:11.022344 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 24 02:39:11.022356 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 24 02:39:11.022368 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 02:39:11.022386 kernel: Using GB pages for direct mapping
Jan 24 02:39:11.022398 kernel: ACPI: Early table checksum verification disabled
Jan 24 02:39:11.022409 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 24 02:39:11.022421 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 02:39:11.022433 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 02:39:11.022444 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 02:39:11.022456 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 24 02:39:11.022467 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 02:39:11.022479 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 02:39:11.022495 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 02:39:11.022507 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 02:39:11.022518 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 24 02:39:11.022530 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 24 02:39:11.022542 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 24 02:39:11.022559 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 24 02:39:11.022588 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 24 02:39:11.022604 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 24 02:39:11.022615 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 24 02:39:11.022639 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 24 02:39:11.022650 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 24 02:39:11.022662 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 24 02:39:11.022673 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 24 02:39:11.022684 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 24 02:39:11.022699 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 24 02:39:11.022711 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 24 02:39:11.022722 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 24 02:39:11.022733 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 24 02:39:11.022744 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 24 02:39:11.022755 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 24 02:39:11.022766 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 24 02:39:11.022777 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 24 02:39:11.022788 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 24 02:39:11.022799 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 24 02:39:11.022821 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 24 02:39:11.022833 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 24 02:39:11.022844 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 24 02:39:11.022855 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 24 02:39:11.022867 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 24 02:39:11.022878 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 24 02:39:11.022890 kernel: Zone ranges:
Jan 24 02:39:11.022901 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 02:39:11.022912 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 24 02:39:11.022928 kernel: Normal empty
Jan 24 02:39:11.022939 kernel: Movable zone start for each node
Jan 24 02:39:11.022950 kernel: Early memory node ranges
Jan 24 02:39:11.022962 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 24 02:39:11.022985 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 24 02:39:11.022997 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 24 02:39:11.023008 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 02:39:11.023020 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 24 02:39:11.023044 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 24 02:39:11.023056 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 02:39:11.023073 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 02:39:11.023085 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 02:39:11.023097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 02:39:11.023109 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 02:39:11.023121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 02:39:11.023133 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 02:39:11.023145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 02:39:11.023157 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 02:39:11.023170 kernel: TSC deadline timer available
Jan 24 02:39:11.023186 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 24 02:39:11.023199 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 02:39:11.023211 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 24 02:39:11.023223 kernel: Booting paravirtualized kernel on KVM
Jan 24 02:39:11.023235 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 02:39:11.023247 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 24 02:39:11.023260 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Jan 24 02:39:11.023283 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Jan 24 02:39:11.023295 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 24 02:39:11.023313 kernel: kvm-guest: PV spinlocks enabled
Jan 24 02:39:11.023849 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 02:39:11.023864 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 02:39:11.023877 kernel: random: crng init done
Jan 24 02:39:11.023889 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 02:39:11.023901 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 24 02:39:11.023919 kernel: Fallback order for Node 0: 0
Jan 24 02:39:11.023931 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 24 02:39:11.023951 kernel: Policy zone: DMA32
Jan 24 02:39:11.025360 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 02:39:11.025376 kernel: software IO TLB: area num 16.
Jan 24 02:39:11.025389 kernel: Memory: 1901592K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 194764K reserved, 0K cma-reserved)
Jan 24 02:39:11.025401 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 24 02:39:11.025414 kernel: Kernel/User page tables isolation: enabled
Jan 24 02:39:11.025426 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 02:39:11.025438 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 02:39:11.025450 kernel: Dynamic Preempt: voluntary
Jan 24 02:39:11.025470 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 02:39:11.025483 kernel: rcu: RCU event tracing is enabled.
Jan 24 02:39:11.025496 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 24 02:39:11.025508 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 02:39:11.025521 kernel: Rude variant of Tasks RCU enabled.
Jan 24 02:39:11.025548 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 02:39:11.025561 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 02:39:11.025574 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 24 02:39:11.025586 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 24 02:39:11.025599 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 02:39:11.025612 kernel: Console: colour VGA+ 80x25
Jan 24 02:39:11.025624 kernel: printk: console [tty0] enabled
Jan 24 02:39:11.025642 kernel: printk: console [ttyS0] enabled
Jan 24 02:39:11.025655 kernel: ACPI: Core revision 20230628
Jan 24 02:39:11.025668 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 02:39:11.025681 kernel: x2apic enabled
Jan 24 02:39:11.025694 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 02:39:11.025711 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 24 02:39:11.025737 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 24 02:39:11.025749 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 02:39:11.025762 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 24 02:39:11.025774 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 24 02:39:11.025786 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 02:39:11.025798 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 02:39:11.025810 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 02:39:11.025822 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 24 02:39:11.025835 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 24 02:39:11.025852 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 24 02:39:11.025864 kernel: MDS: Mitigation: Clear CPU buffers
Jan 24 02:39:11.025876 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 24 02:39:11.025888 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 24 02:39:11.025900 kernel: active return thunk: its_return_thunk
Jan 24 02:39:11.025912 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 24 02:39:11.025924 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 02:39:11.025937 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 02:39:11.025961 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 02:39:11.025974 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 02:39:11.025986 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 24 02:39:11.026006 kernel: Freeing SMP alternatives memory: 32K
Jan 24 02:39:11.026019 kernel: pid_max: default: 32768 minimum: 301
Jan 24 02:39:11.026032 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 02:39:11.026044 kernel: landlock: Up and running.
Jan 24 02:39:11.026057 kernel: SELinux: Initializing.
Jan 24 02:39:11.026069 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 24 02:39:11.026082 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 24 02:39:11.026095 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 24 02:39:11.026108 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 24 02:39:11.026121 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 24 02:39:11.026138 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 24 02:39:11.026152 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 24 02:39:11.026164 kernel: signal: max sigframe size: 1776
Jan 24 02:39:11.026177 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 02:39:11.026190 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 02:39:11.026203 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 02:39:11.026216 kernel: smp: Bringing up secondary CPUs ...
Jan 24 02:39:11.026228 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 02:39:11.026241 kernel: .... node #0, CPUs: #1
Jan 24 02:39:11.026258 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 24 02:39:11.026283 kernel: smp: Brought up 1 node, 2 CPUs
Jan 24 02:39:11.026298 kernel: smpboot: Max logical packages: 16
Jan 24 02:39:11.026310 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 24 02:39:11.026338 kernel: devtmpfs: initialized
Jan 24 02:39:11.027385 kernel: x86/mm: Memory block size: 128MB
Jan 24 02:39:11.027402 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 02:39:11.027416 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 24 02:39:11.027428 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 02:39:11.027449 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 02:39:11.027462 kernel: audit: initializing netlink subsys (disabled)
Jan 24 02:39:11.027475 kernel: audit: type=2000 audit(1769222349.352:1): state=initialized audit_enabled=0 res=1
Jan 24 02:39:11.027488 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 02:39:11.027501 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 02:39:11.027513 kernel: cpuidle: using governor menu
Jan 24 02:39:11.027526 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 02:39:11.027539 kernel: dca service started, version 1.12.1
Jan 24 02:39:11.027552 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 24 02:39:11.027570 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 24 02:39:11.027583 kernel: PCI: Using configuration type 1 for base access
Jan 24 02:39:11.027596 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 02:39:11.027609 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 02:39:11.027621 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 02:39:11.027634 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 02:39:11.027647 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 02:39:11.027659 kernel: ACPI: Added _OSI(Module Device)
Jan 24 02:39:11.027672 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 02:39:11.027690 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 02:39:11.027703 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 02:39:11.027716 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 02:39:11.027728 kernel: ACPI: Interpreter enabled
Jan 24 02:39:11.027741 kernel: ACPI: PM: (supports S0 S5)
Jan 24 02:39:11.027754 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 02:39:11.027767 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 02:39:11.027780 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 02:39:11.027793 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 02:39:11.027810 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 02:39:11.028071 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 02:39:11.028250 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 24 02:39:11.029498 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 24 02:39:11.029521 kernel: PCI host bridge to bus 0000:00
Jan 24 02:39:11.029698 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 02:39:11.029863 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 02:39:11.030028 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 02:39:11.030175 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 24 02:39:11.030357 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 24 02:39:11.030506 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 24 02:39:11.030662 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 02:39:11.030862 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 24 02:39:11.031052 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 24 02:39:11.031237 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 24 02:39:11.032796 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 24 02:39:11.032968 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 24 02:39:11.033150 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 02:39:11.033384 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 24 02:39:11.033557 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 24 02:39:11.033769 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 24 02:39:11.033958 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 24 02:39:11.034153 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 24 02:39:11.034362 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 24 02:39:11.034549 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 24 02:39:11.034716 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 24 02:39:11.034899 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 24 02:39:11.035538 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 24 02:39:11.035728 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 24 02:39:11.035891 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 24 02:39:11.036062 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 24 02:39:11.036223 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 24 02:39:11.038521 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 24 02:39:11.038699 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 24 02:39:11.038878 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 24 02:39:11.039045 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 24 02:39:11.039213 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 24 02:39:11.039408 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 24 02:39:11.039573 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 24 02:39:11.039755 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 24 02:39:11.039920 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 24 02:39:11.040082 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 24 02:39:11.040277 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 24 02:39:11.041533 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 24 02:39:11.041707 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 02:39:11.041885 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 24 02:39:11.042060 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 24 02:39:11.042223 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 24 02:39:11.042426 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 24 02:39:11.042591 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 24 02:39:11.042768 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 24 02:39:11.042936 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 24 02:39:11.043109 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 24 02:39:11.043280 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 24 02:39:11.046486 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 24 02:39:11.046668 kernel: pci_bus 0000:02: extended config space not accessible
Jan 24 02:39:11.046854 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 24 02:39:11.047031 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 24 02:39:11.047224 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 24 02:39:11.047440 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 24 02:39:11.047619 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 24 02:39:11.047787 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 24 02:39:11.047953 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 24 02:39:11.048115 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 24 02:39:11.048293 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 24 02:39:11.050519 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 24 02:39:11.050742 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 24 02:39:11.050910 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 24 02:39:11.051075 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 24 02:39:11.051238 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 24 02:39:11.051437 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 24 02:39:11.051603 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 24 02:39:11.051766 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 24 02:39:11.051940 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 24 02:39:11.052106 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 24 02:39:11.052304 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 24 02:39:11.053564 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 24 02:39:11.053731 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 24 02:39:11.053902 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 24 02:39:11.054076 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 24 02:39:11.054257 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 24 02:39:11.055502 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 24 02:39:11.055670 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 24 02:39:11.055861 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 24 02:39:11.056487 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 24 02:39:11.056513 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 02:39:11.056527 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 02:39:11.056540 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 02:39:11.056566 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 02:39:11.056585 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 02:39:11.056636 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 02:39:11.056649 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 02:39:11.056662 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 02:39:11.056675 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 02:39:11.056688 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 02:39:11.056701 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 02:39:11.056714 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 02:39:11.056727 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 02:39:11.056740 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 02:39:11.056758 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 02:39:11.056771 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 02:39:11.056784 kernel: iommu: Default domain type: Translated
Jan 24 02:39:11.056797 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 02:39:11.056810 kernel: PCI: Using ACPI for IRQ routing
Jan 24 02:39:11.056823 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 02:39:11.056835 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 24 02:39:11.056848 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 24 02:39:11.057032 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 02:39:11.057228 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 02:39:11.059470 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 02:39:11.059491 kernel: vgaarb: loaded
Jan 24 02:39:11.059505 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 02:39:11.059518 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 02:39:11.059531 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 02:39:11.059543 kernel: pnp: PnP ACPI init
Jan 24 02:39:11.059757 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 24 02:39:11.059799 kernel: pnp: PnP ACPI: found 5 devices
Jan 24 02:39:11.059813 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 02:39:11.059826 kernel: NET: Registered PF_INET protocol family
Jan 24 02:39:11.059839 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 02:39:11.059852 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 24 02:39:11.059866 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 02:39:11.059878 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 24 02:39:11.059891 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 24 02:39:11.059909 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 24 02:39:11.059922 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 24 02:39:11.059935 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 24 02:39:11.059948 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 02:39:11.059961 kernel: NET: Registered PF_XDP protocol family
Jan 24 02:39:11.060133 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 24 02:39:11.060397 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 24 02:39:11.060580 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 24 02:39:11.060741 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 24 02:39:11.060927 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 24 02:39:11.061089 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 24 02:39:11.061260 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 24 02:39:11.061472 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 24 02:39:11.061656 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 24 02:39:11.061849 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 24 02:39:11.062021 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 24 02:39:11.062192 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 24 02:39:11.064400 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 24 02:39:11.064591 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 24 02:39:11.064764 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 24 02:39:11.064937 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 24 02:39:11.065122 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 24 02:39:11.065355 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 24 02:39:11.065522 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 24 02:39:11.065706 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 24 02:39:11.065879 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 24 02:39:11.066060 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 24 02:39:11.066222 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 24 02:39:11.068443 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 24 02:39:11.068608 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 24 02:39:11.068777 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 24 02:39:11.068953 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 24 02:39:11.069114 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 24 02:39:11.069288 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 24 02:39:11.071497 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 24 02:39:11.071665 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 24 02:39:11.071847 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 24 02:39:11.072017 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 24 02:39:11.072179 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 24 02:39:11.072396 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 24 02:39:11.072561 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 24 02:39:11.072730 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 24 02:39:11.072885 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 24 02:39:11.073056 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 24 02:39:11.073218 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 24 02:39:11.073423 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 24 02:39:11.073598 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 24 02:39:11.073770 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 24 02:39:11.073933 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 24 02:39:11.074096 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 24 02:39:11.074284 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 24 02:39:11.076488 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 24 02:39:11.076658 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 24 02:39:11.076820 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 24 02:39:11.076982 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 24 02:39:11.077132 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 02:39:11.077291 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 02:39:11.079475 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 02:39:11.079624 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 24 02:39:11.079815 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 24 02:39:11.079981 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 24 02:39:11.080149 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 24 02:39:11.080354 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 24 02:39:11.080516 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 24 02:39:11.080678 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 24 02:39:11.080840 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 24 02:39:11.081025 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 24 02:39:11.081175 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 24 02:39:11.081365 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 24 02:39:11.081522 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 24 02:39:11.081675 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 24 02:39:11.081847 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 24 02:39:11.082012 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 24 02:39:11.082166 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 24 02:39:11.084391 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 24 02:39:11.084554 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 24 02:39:11.084710 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 24 02:39:11.084888 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 24 02:39:11.085056 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 24 02:39:11.085220 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 24 02:39:11.087453 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 24 02:39:11.087611 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 24 02:39:11.087763 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 24 02:39:11.087927 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 24 02:39:11.088079 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 24 02:39:11.088232 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 24 02:39:11.088261 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 02:39:11.088287 kernel: PCI: CLS 0 bytes, default 64
Jan 24 02:39:11.088301 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 24 02:39:11.089341 kernel: software IO TLB: mapped [mem 
0x0000000079800000-0x000000007d800000] (64MB) Jan 24 02:39:11.089364 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 24 02:39:11.089378 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 24 02:39:11.089392 kernel: Initialise system trusted keyrings Jan 24 02:39:11.089406 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 24 02:39:11.089428 kernel: Key type asymmetric registered Jan 24 02:39:11.089442 kernel: Asymmetric key parser 'x509' registered Jan 24 02:39:11.089455 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 02:39:11.089468 kernel: io scheduler mq-deadline registered Jan 24 02:39:11.089482 kernel: io scheduler kyber registered Jan 24 02:39:11.089495 kernel: io scheduler bfq registered Jan 24 02:39:11.089666 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 24 02:39:11.089832 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 24 02:39:11.089995 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:39:11.090167 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 24 02:39:11.090371 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 24 02:39:11.090538 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:39:11.090702 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 24 02:39:11.090863 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 24 02:39:11.091023 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:39:11.091196 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 24 02:39:11.091404 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Jan 24 02:39:11.091569 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:39:11.091733 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 24 02:39:11.091897 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 24 02:39:11.092083 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:39:11.092256 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 24 02:39:11.092470 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 24 02:39:11.092641 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:39:11.092803 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 24 02:39:11.092972 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 24 02:39:11.093129 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:39:11.093347 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 24 02:39:11.093517 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 24 02:39:11.093683 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:39:11.093704 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 02:39:11.093719 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 02:39:11.093733 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 24 02:39:11.093754 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 02:39:11.093768 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 02:39:11.093782 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Jan 24 02:39:11.093796 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 02:39:11.093821 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 02:39:11.093835 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 02:39:11.093998 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 24 02:39:11.094151 kernel: rtc_cmos 00:03: registered as rtc0 Jan 24 02:39:11.094392 kernel: rtc_cmos 00:03: setting system clock to 2026-01-24T02:39:10 UTC (1769222350) Jan 24 02:39:11.094549 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 24 02:39:11.094569 kernel: intel_pstate: CPU model not supported Jan 24 02:39:11.094583 kernel: NET: Registered PF_INET6 protocol family Jan 24 02:39:11.094596 kernel: Segment Routing with IPv6 Jan 24 02:39:11.094610 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 02:39:11.094623 kernel: NET: Registered PF_PACKET protocol family Jan 24 02:39:11.094637 kernel: Key type dns_resolver registered Jan 24 02:39:11.094650 kernel: IPI shorthand broadcast: enabled Jan 24 02:39:11.094672 kernel: sched_clock: Marking stable (1294003789, 229909593)->(1652774654, -128861272) Jan 24 02:39:11.094686 kernel: registered taskstats version 1 Jan 24 02:39:11.094700 kernel: Loading compiled-in X.509 certificates Jan 24 02:39:11.094713 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 02:39:11.094727 kernel: Key type .fscrypt registered Jan 24 02:39:11.094740 kernel: Key type fscrypt-provisioning registered Jan 24 02:39:11.094754 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 24 02:39:11.094767 kernel: ima: Allocated hash algorithm: sha1 Jan 24 02:39:11.094780 kernel: ima: No architecture policies found Jan 24 02:39:11.094799 kernel: clk: Disabling unused clocks Jan 24 02:39:11.094813 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 02:39:11.094827 kernel: Write protecting the kernel read-only data: 36864k Jan 24 02:39:11.094853 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 02:39:11.094866 kernel: Run /init as init process Jan 24 02:39:11.094878 kernel: with arguments: Jan 24 02:39:11.094891 kernel: /init Jan 24 02:39:11.094904 kernel: with environment: Jan 24 02:39:11.094930 kernel: HOME=/ Jan 24 02:39:11.094943 kernel: TERM=linux Jan 24 02:39:11.094965 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 02:39:11.094981 systemd[1]: Detected virtualization kvm. Jan 24 02:39:11.094995 systemd[1]: Detected architecture x86-64. Jan 24 02:39:11.095009 systemd[1]: Running in initrd. Jan 24 02:39:11.095023 systemd[1]: No hostname configured, using default hostname. Jan 24 02:39:11.095037 systemd[1]: Hostname set to . Jan 24 02:39:11.095052 systemd[1]: Initializing machine ID from VM UUID. Jan 24 02:39:11.095071 systemd[1]: Queued start job for default target initrd.target. Jan 24 02:39:11.095085 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 02:39:11.095099 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 02:39:11.095114 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jan 24 02:39:11.095129 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 02:39:11.095143 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 02:39:11.095163 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 02:39:11.095184 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 02:39:11.095198 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 02:39:11.095213 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 02:39:11.095228 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 02:39:11.095242 systemd[1]: Reached target paths.target - Path Units. Jan 24 02:39:11.095256 systemd[1]: Reached target slices.target - Slice Units. Jan 24 02:39:11.095282 systemd[1]: Reached target swap.target - Swaps. Jan 24 02:39:11.095297 systemd[1]: Reached target timers.target - Timer Units. Jan 24 02:39:11.095340 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 02:39:11.095357 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 02:39:11.095372 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 02:39:11.095386 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 02:39:11.095401 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 02:39:11.095415 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 02:39:11.095430 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 02:39:11.095444 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 24 02:39:11.095458 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 02:39:11.095480 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 02:39:11.095494 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 02:39:11.095508 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 02:39:11.095523 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 02:39:11.095537 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 02:39:11.095600 systemd-journald[204]: Collecting audit messages is disabled. Jan 24 02:39:11.095640 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 02:39:11.095655 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 02:39:11.095670 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 02:39:11.095684 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 02:39:11.095705 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 02:39:11.095720 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 02:39:11.095734 kernel: Bridge firewalling registered Jan 24 02:39:11.095748 systemd-journald[204]: Journal started Jan 24 02:39:11.095779 systemd-journald[204]: Runtime Journal (/run/log/journal/eb3905c88dbd4da49f815f60c09a2d2b) is 4.7M, max 38.0M, 33.2M free. Jan 24 02:39:11.056665 systemd-modules-load[205]: Inserted module 'overlay' Jan 24 02:39:11.089284 systemd-modules-load[205]: Inserted module 'br_netfilter' Jan 24 02:39:11.153337 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 02:39:11.154551 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 24 02:39:11.155577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 02:39:11.165561 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 02:39:11.168334 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 02:39:11.180528 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 02:39:11.184367 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 02:39:11.194705 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 02:39:11.198828 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 02:39:11.207534 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 02:39:11.214494 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 02:39:11.217373 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 02:39:11.219389 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 02:39:11.226499 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 02:39:11.243377 dracut-cmdline[239]: dracut-dracut-053 Jan 24 02:39:11.247779 dracut-cmdline[239]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 02:39:11.260341 systemd-resolved[237]: Positive Trust Anchors: Jan 24 02:39:11.260361 systemd-resolved[237]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 02:39:11.260404 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 02:39:11.269421 systemd-resolved[237]: Defaulting to hostname 'linux'. Jan 24 02:39:11.271001 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 02:39:11.272796 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 02:39:11.354375 kernel: SCSI subsystem initialized Jan 24 02:39:11.366336 kernel: Loading iSCSI transport class v2.0-870. Jan 24 02:39:11.381344 kernel: iscsi: registered transport (tcp) Jan 24 02:39:11.408792 kernel: iscsi: registered transport (qla4xxx) Jan 24 02:39:11.408833 kernel: QLogic iSCSI HBA Driver Jan 24 02:39:11.471453 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 02:39:11.478535 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 02:39:11.512396 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 24 02:39:11.512490 kernel: device-mapper: uevent: version 1.0.3 Jan 24 02:39:11.514631 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 02:39:11.564365 kernel: raid6: sse2x4 gen() 13252 MB/s Jan 24 02:39:11.582359 kernel: raid6: sse2x2 gen() 9208 MB/s Jan 24 02:39:11.601044 kernel: raid6: sse2x1 gen() 10097 MB/s Jan 24 02:39:11.601091 kernel: raid6: using algorithm sse2x4 gen() 13252 MB/s Jan 24 02:39:11.620012 kernel: raid6: .... xor() 7735 MB/s, rmw enabled Jan 24 02:39:11.620075 kernel: raid6: using ssse3x2 recovery algorithm Jan 24 02:39:11.647387 kernel: xor: automatically using best checksumming function avx Jan 24 02:39:11.847430 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 02:39:11.863526 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 02:39:11.871575 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 02:39:11.901137 systemd-udevd[423]: Using default interface naming scheme 'v255'. Jan 24 02:39:11.908686 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 02:39:11.919501 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 02:39:11.940356 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation Jan 24 02:39:11.980086 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 02:39:11.985517 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 02:39:12.106794 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 02:39:12.116534 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 02:39:12.141590 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 02:39:12.146151 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 24 02:39:12.148131 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 02:39:12.150238 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 02:39:12.156504 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 02:39:12.188182 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 02:39:12.230345 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 24 02:39:12.247378 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 02:39:12.247432 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 24 02:39:12.277801 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 02:39:12.277860 kernel: GPT:17805311 != 125829119 Jan 24 02:39:12.277880 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 02:39:12.279602 kernel: GPT:17805311 != 125829119 Jan 24 02:39:12.279644 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 02:39:12.279940 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 02:39:12.287209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 02:39:12.280196 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 02:39:12.289285 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 02:39:12.290062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 02:39:12.290260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 02:39:12.291081 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 02:39:12.300426 kernel: libata version 3.00 loaded. Jan 24 02:39:12.304345 kernel: AVX version of gcm_enc/dec engaged. Jan 24 02:39:12.302742 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 24 02:39:12.312826 kernel: AES CTR mode by8 optimization enabled Jan 24 02:39:12.332379 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (476) Jan 24 02:39:12.337782 kernel: ACPI: bus type USB registered Jan 24 02:39:12.337828 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 02:39:12.345348 kernel: usbcore: registered new interface driver usbfs Jan 24 02:39:12.348346 kernel: usbcore: registered new interface driver hub Jan 24 02:39:12.348381 kernel: usbcore: registered new device driver usb Jan 24 02:39:12.349953 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 02:39:12.352012 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 02:39:12.375360 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (481) Jan 24 02:39:12.387412 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 02:39:12.387734 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 02:39:12.399993 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jan 24 02:39:12.503331 kernel: scsi host0: ahci Jan 24 02:39:12.503669 kernel: scsi host1: ahci Jan 24 02:39:12.503877 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 24 02:39:12.504141 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 24 02:39:12.504410 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 24 02:39:12.504617 kernel: scsi host2: ahci Jan 24 02:39:12.504812 kernel: scsi host3: ahci Jan 24 02:39:12.505017 kernel: scsi host4: ahci Jan 24 02:39:12.505249 kernel: scsi host5: ahci Jan 24 02:39:12.505474 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jan 24 02:39:12.505496 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jan 24 02:39:12.505514 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jan 24 02:39:12.505532 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jan 24 02:39:12.505549 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jan 24 02:39:12.505568 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jan 24 02:39:12.505586 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 24 02:39:12.505794 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 24 02:39:12.505997 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 24 02:39:12.506215 kernel: hub 1-0:1.0: USB hub found Jan 24 02:39:12.506475 kernel: hub 1-0:1.0: 4 ports detected Jan 24 02:39:12.506676 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 24 02:39:12.506959 kernel: hub 2-0:1.0: USB hub found Jan 24 02:39:12.507194 kernel: hub 2-0:1.0: 4 ports detected Jan 24 02:39:12.504044 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jan 24 02:39:12.505209 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 02:39:12.513310 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 02:39:12.531228 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 24 02:39:12.540589 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 02:39:12.545213 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 02:39:12.548795 disk-uuid[566]: Primary Header is updated. Jan 24 02:39:12.548795 disk-uuid[566]: Secondary Entries is updated. Jan 24 02:39:12.548795 disk-uuid[566]: Secondary Header is updated. Jan 24 02:39:12.557665 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 02:39:12.562347 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 02:39:12.569342 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 02:39:12.593389 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 24 02:39:12.660356 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 24 02:39:12.726385 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 02:39:12.726459 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 02:39:12.728802 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 02:39:12.730442 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 02:39:12.733760 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 02:39:12.733794 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 24 02:39:12.808349 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 24 02:39:12.814576 kernel: usbcore: registered new interface driver usbhid Jan 24 02:39:12.814624 kernel: usbhid: USB HID core driver Jan 24 02:39:12.822452 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 24 02:39:12.822493 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 24 02:39:13.570373 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 02:39:13.571556 disk-uuid[567]: The operation has completed successfully. Jan 24 02:39:13.625611 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 02:39:13.626540 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 02:39:13.645521 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 02:39:13.660483 sh[589]: Success Jan 24 02:39:13.677448 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 24 02:39:13.742695 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 02:39:13.760455 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 02:39:13.762233 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 24 02:39:13.790708 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 02:39:13.790761 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 02:39:13.792831 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 02:39:13.796352 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 02:39:13.796392 kernel: BTRFS info (device dm-0): using free space tree Jan 24 02:39:13.807495 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 02:39:13.808965 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 02:39:13.815504 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 02:39:13.818679 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 02:39:13.839732 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 02:39:13.839776 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 02:39:13.839796 kernel: BTRFS info (device vda6): using free space tree Jan 24 02:39:13.845334 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 02:39:13.856762 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 02:39:13.860486 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 02:39:13.865485 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 02:39:13.873541 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 02:39:14.020722 ignition[675]: Ignition 2.19.0 Jan 24 02:39:14.020752 ignition[675]: Stage: fetch-offline Jan 24 02:39:14.024214 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 24 02:39:14.020865 ignition[675]: no configs at "/usr/lib/ignition/base.d" Jan 24 02:39:14.027525 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 02:39:14.020898 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 02:39:14.021114 ignition[675]: parsed url from cmdline: "" Jan 24 02:39:14.021122 ignition[675]: no config URL provided Jan 24 02:39:14.021142 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 02:39:14.021158 ignition[675]: no config at "/usr/lib/ignition/user.ign" Jan 24 02:39:14.021168 ignition[675]: failed to fetch config: resource requires networking Jan 24 02:39:14.023903 ignition[675]: Ignition finished successfully Jan 24 02:39:14.035672 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 02:39:14.084977 systemd-networkd[777]: lo: Link UP Jan 24 02:39:14.085006 systemd-networkd[777]: lo: Gained carrier Jan 24 02:39:14.088167 systemd-networkd[777]: Enumeration completed Jan 24 02:39:14.089212 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 02:39:14.089220 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 02:39:14.090983 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 02:39:14.091614 systemd-networkd[777]: eth0: Link UP Jan 24 02:39:14.091620 systemd-networkd[777]: eth0: Gained carrier Jan 24 02:39:14.091632 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 02:39:14.093330 systemd[1]: Reached target network.target - Network. Jan 24 02:39:14.102623 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 24 02:39:14.112464 systemd-networkd[777]: eth0: DHCPv4 address 10.230.33.130/30, gateway 10.230.33.129 acquired from 10.230.33.129
Jan 24 02:39:14.120935 ignition[779]: Ignition 2.19.0
Jan 24 02:39:14.120956 ignition[779]: Stage: fetch
Jan 24 02:39:14.121270 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 24 02:39:14.121290 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 02:39:14.121484 ignition[779]: parsed url from cmdline: ""
Jan 24 02:39:14.121501 ignition[779]: no config URL provided
Jan 24 02:39:14.121522 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 02:39:14.121539 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Jan 24 02:39:14.121757 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 24 02:39:14.121808 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 24 02:39:14.121840 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 24 02:39:14.137087 ignition[779]: GET result: OK
Jan 24 02:39:14.138050 ignition[779]: parsing config with SHA512: 419f1981ee678184f8ab91d673255483d9ec34d16d4784b23ee145592a1f5dc8e7fbe209200dee6a3dcc1d0de743769a2e79d93e143d7e6d79465a784a558cc7
Jan 24 02:39:14.144110 unknown[779]: fetched base config from "system"
Jan 24 02:39:14.144378 unknown[779]: fetched base config from "system"
Jan 24 02:39:14.144390 unknown[779]: fetched user config from "openstack"
Jan 24 02:39:14.144971 ignition[779]: fetch: fetch complete
Jan 24 02:39:14.147843 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 24 02:39:14.144981 ignition[779]: fetch: fetch passed
Jan 24 02:39:14.145049 ignition[779]: Ignition finished successfully
Jan 24 02:39:14.155525 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 02:39:14.180773 ignition[787]: Ignition 2.19.0
Jan 24 02:39:14.180792 ignition[787]: Stage: kargs
Jan 24 02:39:14.181181 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jan 24 02:39:14.181216 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 02:39:14.184219 ignition[787]: kargs: kargs passed
Jan 24 02:39:14.184292 ignition[787]: Ignition finished successfully
Jan 24 02:39:14.187614 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 02:39:14.204523 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 02:39:14.222452 ignition[793]: Ignition 2.19.0
Jan 24 02:39:14.223770 ignition[793]: Stage: disks
Jan 24 02:39:14.224647 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Jan 24 02:39:14.224670 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 02:39:14.225985 ignition[793]: disks: disks passed
Jan 24 02:39:14.228051 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 02:39:14.226075 ignition[793]: Ignition finished successfully
Jan 24 02:39:14.229513 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 02:39:14.231141 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 02:39:14.232969 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 02:39:14.234648 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 02:39:14.236350 systemd[1]: Reached target basic.target - Basic System.
Jan 24 02:39:14.250550 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 02:39:14.271377 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 24 02:39:14.275631 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 02:39:14.281457 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 02:39:14.405358 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 02:39:14.405823 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 02:39:14.407301 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 02:39:14.414440 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 02:39:14.418450 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 02:39:14.419856 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 24 02:39:14.422013 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 24 02:39:14.424053 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 02:39:14.425519 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 02:39:14.438510 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (809)
Jan 24 02:39:14.438555 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 02:39:14.442150 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 02:39:14.442201 kernel: BTRFS info (device vda6): using free space tree
Jan 24 02:39:14.442292 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 02:39:14.448342 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 02:39:14.460172 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 02:39:14.463897 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 02:39:14.530677 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 02:39:14.544443 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Jan 24 02:39:14.555822 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 02:39:14.563862 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 02:39:14.674306 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 02:39:14.688516 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 02:39:14.692491 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 02:39:14.703351 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 02:39:14.736659 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 02:39:14.741695 ignition[926]: INFO : Ignition 2.19.0
Jan 24 02:39:14.744410 ignition[926]: INFO : Stage: mount
Jan 24 02:39:14.744410 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 02:39:14.744410 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 02:39:14.744410 ignition[926]: INFO : mount: mount passed
Jan 24 02:39:14.744410 ignition[926]: INFO : Ignition finished successfully
Jan 24 02:39:14.747018 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 24 02:39:14.788439 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 02:39:15.150599 systemd-networkd[777]: eth0: Gained IPv6LL
Jan 24 02:39:16.657546 systemd-networkd[777]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8860:24:19ff:fee6:2182/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8860:24:19ff:fee6:2182/64 assigned by NDisc.
Jan 24 02:39:16.657560 systemd-networkd[777]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 24 02:39:21.598162 coreos-metadata[811]: Jan 24 02:39:21.598 WARN failed to locate config-drive, using the metadata service API instead
Jan 24 02:39:21.621902 coreos-metadata[811]: Jan 24 02:39:21.621 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 24 02:39:21.645624 coreos-metadata[811]: Jan 24 02:39:21.645 INFO Fetch successful
Jan 24 02:39:21.646754 coreos-metadata[811]: Jan 24 02:39:21.646 INFO wrote hostname srv-aqhf7.gb1.brightbox.com to /sysroot/etc/hostname
Jan 24 02:39:21.649526 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 24 02:39:21.649705 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 24 02:39:21.656458 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 24 02:39:21.679528 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 02:39:21.705380 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943)
Jan 24 02:39:21.711675 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 02:39:21.711711 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 02:39:21.711732 kernel: BTRFS info (device vda6): using free space tree
Jan 24 02:39:21.716340 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 02:39:21.719574 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 02:39:21.752412 ignition[961]: INFO : Ignition 2.19.0
Jan 24 02:39:21.752412 ignition[961]: INFO : Stage: files
Jan 24 02:39:21.754233 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 02:39:21.754233 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 02:39:21.754233 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Jan 24 02:39:21.757068 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 24 02:39:21.757068 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 24 02:39:21.759237 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 24 02:39:21.759237 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 24 02:39:21.761603 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 24 02:39:21.760677 unknown[961]: wrote ssh authorized keys file for user: core
Jan 24 02:39:21.763909 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 02:39:21.763909 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 24 02:39:21.990408 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 24 02:39:22.305509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 02:39:22.305509 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 24 02:39:22.308141 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 24 02:39:22.308141 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 02:39:22.308141 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 02:39:22.308141 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 02:39:22.318460 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 02:39:22.318460 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 02:39:22.318460 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 02:39:22.318460 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 02:39:22.318460 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 02:39:22.318460 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 02:39:22.318460 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 02:39:22.318460 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 02:39:22.318460 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 24 02:39:22.995385 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 24 02:39:25.195371 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 02:39:25.195371 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 24 02:39:25.198539 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 02:39:25.198539 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 02:39:25.198539 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 24 02:39:25.198539 ignition[961]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 24 02:39:25.198539 ignition[961]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 24 02:39:25.198539 ignition[961]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 02:39:25.198539 ignition[961]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 02:39:25.198539 ignition[961]: INFO : files: files passed
Jan 24 02:39:25.209203 ignition[961]: INFO : Ignition finished successfully
Jan 24 02:39:25.201021 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 24 02:39:25.210558 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 24 02:39:25.215943 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 24 02:39:25.218734 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 24 02:39:25.218935 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 24 02:39:25.237664 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 02:39:25.240368 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 02:39:25.241693 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 02:39:25.243122 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 02:39:25.245506 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 24 02:39:25.262616 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 24 02:39:25.292131 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 24 02:39:25.292297 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 24 02:39:25.293801 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 24 02:39:25.294913 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 24 02:39:25.296855 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 24 02:39:25.311986 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 24 02:39:25.328657 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 02:39:25.334529 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 24 02:39:25.357372 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 24 02:39:25.358297 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 02:39:25.359237 systemd[1]: Stopped target timers.target - Timer Units.
Jan 24 02:39:25.360038 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 24 02:39:25.360202 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 02:39:25.362483 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 24 02:39:25.363456 systemd[1]: Stopped target basic.target - Basic System.
Jan 24 02:39:25.364794 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 24 02:39:25.366384 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 02:39:25.368025 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 24 02:39:25.369476 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 24 02:39:25.370875 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 02:39:25.372639 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 24 02:39:25.374178 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 24 02:39:25.375770 systemd[1]: Stopped target swap.target - Swaps.
Jan 24 02:39:25.377214 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 24 02:39:25.377410 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 02:39:25.379541 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 24 02:39:25.380506 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 02:39:25.382004 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 24 02:39:25.382179 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 02:39:25.383527 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 24 02:39:25.383794 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 24 02:39:25.385552 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 24 02:39:25.385734 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 02:39:25.387538 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 24 02:39:25.387690 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 24 02:39:25.396192 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 24 02:39:25.398019 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 24 02:39:25.399805 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 24 02:39:25.400002 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 02:39:25.403946 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 24 02:39:25.404124 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 02:39:25.415118 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 24 02:39:25.415290 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 24 02:39:25.434366 ignition[1013]: INFO : Ignition 2.19.0
Jan 24 02:39:25.434366 ignition[1013]: INFO : Stage: umount
Jan 24 02:39:25.432650 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 24 02:39:25.443754 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 02:39:25.443754 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 02:39:25.443754 ignition[1013]: INFO : umount: umount passed
Jan 24 02:39:25.443754 ignition[1013]: INFO : Ignition finished successfully
Jan 24 02:39:25.438894 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 24 02:39:25.439089 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 24 02:39:25.442809 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 24 02:39:25.442961 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 24 02:39:25.445470 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 24 02:39:25.445620 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 24 02:39:25.447174 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 24 02:39:25.447277 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 24 02:39:25.448644 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 24 02:39:25.448715 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 24 02:39:25.450037 systemd[1]: Stopped target network.target - Network.
Jan 24 02:39:25.451458 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 24 02:39:25.451534 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 02:39:25.453034 systemd[1]: Stopped target paths.target - Path Units.
Jan 24 02:39:25.454463 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 24 02:39:25.458459 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 02:39:25.460073 systemd[1]: Stopped target slices.target - Slice Units.
Jan 24 02:39:25.461561 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 24 02:39:25.462933 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 24 02:39:25.463001 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 02:39:25.464574 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 24 02:39:25.464643 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 02:39:25.466088 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 24 02:39:25.466156 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 24 02:39:25.467509 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 24 02:39:25.467589 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 24 02:39:25.469093 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 24 02:39:25.469161 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 24 02:39:25.471125 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 24 02:39:25.473105 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 24 02:39:25.476663 systemd-networkd[777]: eth0: DHCPv6 lease lost
Jan 24 02:39:25.479734 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 24 02:39:25.479951 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 24 02:39:25.483270 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 24 02:39:25.483353 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 02:39:25.491511 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 24 02:39:25.492287 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 24 02:39:25.493449 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 02:39:25.496946 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 02:39:25.498644 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 24 02:39:25.498818 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 24 02:39:25.506968 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 24 02:39:25.507200 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 02:39:25.512310 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 24 02:39:25.512612 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 24 02:39:25.515288 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 24 02:39:25.515402 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 24 02:39:25.517067 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 24 02:39:25.517130 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 02:39:25.518738 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 24 02:39:25.518809 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 02:39:25.520943 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 24 02:39:25.521011 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 24 02:39:25.522371 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 02:39:25.522443 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 02:39:25.531604 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 24 02:39:25.532464 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 24 02:39:25.532544 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 24 02:39:25.538167 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 24 02:39:25.538239 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 24 02:39:25.539006 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 24 02:39:25.539072 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 02:39:25.541449 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 24 02:39:25.541531 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 02:39:25.542693 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 24 02:39:25.542763 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 02:39:25.544524 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 24 02:39:25.544588 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 02:39:25.546153 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 02:39:25.546220 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 02:39:25.548582 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 24 02:39:25.548721 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 24 02:39:25.550089 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 24 02:39:25.556573 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 24 02:39:25.569641 systemd[1]: Switching root.
Jan 24 02:39:25.607885 systemd-journald[204]: Journal stopped
Jan 24 02:39:27.246719 systemd-journald[204]: Received SIGTERM from PID 1 (systemd).
Jan 24 02:39:27.246930 kernel: SELinux: policy capability network_peer_controls=1
Jan 24 02:39:27.246975 kernel: SELinux: policy capability open_perms=1
Jan 24 02:39:27.247005 kernel: SELinux: policy capability extended_socket_class=1
Jan 24 02:39:27.247032 kernel: SELinux: policy capability always_check_network=0
Jan 24 02:39:27.247072 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 24 02:39:27.247099 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 24 02:39:27.247126 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 24 02:39:27.247152 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 24 02:39:27.247181 kernel: audit: type=1403 audit(1769222365.891:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 24 02:39:27.247230 systemd[1]: Successfully loaded SELinux policy in 52.845ms.
Jan 24 02:39:27.247296 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.183ms.
Jan 24 02:39:27.251404 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 02:39:27.251459 systemd[1]: Detected virtualization kvm.
Jan 24 02:39:27.251485 systemd[1]: Detected architecture x86-64.
Jan 24 02:39:27.251517 systemd[1]: Detected first boot.
Jan 24 02:39:27.251558 systemd[1]: Hostname set to .
Jan 24 02:39:27.251588 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 02:39:27.251617 zram_generator::config[1055]: No configuration found.
Jan 24 02:39:27.251646 systemd[1]: Populated /etc with preset unit settings.
Jan 24 02:39:27.251674 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 24 02:39:27.251713 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 24 02:39:27.251737 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 24 02:39:27.251781 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 24 02:39:27.251806 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 24 02:39:27.251845 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 02:39:27.251883 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 24 02:39:27.251916 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 24 02:39:27.251946 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 24 02:39:27.251997 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 24 02:39:27.252022 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 24 02:39:27.252045 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 02:39:27.252066 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 02:39:27.252088 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 24 02:39:27.252110 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 24 02:39:27.252139 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 24 02:39:27.252170 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 02:39:27.252201 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 24 02:39:27.252236 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 02:39:27.252259 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 24 02:39:27.252288 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 24 02:39:27.252311 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 24 02:39:27.253508 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 24 02:39:27.253545 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 02:39:27.255373 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 02:39:27.255416 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 02:39:27.255449 systemd[1]: Reached target swap.target - Swaps.
Jan 24 02:39:27.255477 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 24 02:39:27.255509 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 24 02:39:27.255538 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 02:39:27.255567 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 02:39:27.255616 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 02:39:27.255668 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 02:39:27.255693 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 02:39:27.255714 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 02:39:27.255741 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 02:39:27.255763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 02:39:27.255790 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 02:39:27.255818 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 02:39:27.255868 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 02:39:27.255896 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 02:39:27.255918 systemd[1]: Reached target machines.target - Containers. Jan 24 02:39:27.255948 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 02:39:27.255971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 02:39:27.255992 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 02:39:27.256014 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 02:39:27.256036 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 02:39:27.256057 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 24 02:39:27.256093 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 02:39:27.256123 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 02:39:27.256145 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 02:39:27.256168 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 02:39:27.256189 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 02:39:27.256211 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 02:39:27.256232 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 02:39:27.256253 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 02:39:27.256288 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 02:39:27.261393 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 02:39:27.261430 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 02:39:27.261453 kernel: fuse: init (API version 7.39) Jan 24 02:39:27.261476 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 02:39:27.261498 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 02:39:27.261529 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 02:39:27.261559 systemd[1]: Stopped verity-setup.service. Jan 24 02:39:27.261589 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 02:39:27.261627 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 02:39:27.261651 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 24 02:39:27.261679 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 02:39:27.261709 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 02:39:27.261732 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 02:39:27.261771 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 02:39:27.261795 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 02:39:27.261816 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 02:39:27.261844 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 02:39:27.261880 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 02:39:27.261904 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 02:39:27.261942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 02:39:27.261965 kernel: loop: module loaded Jan 24 02:39:27.261993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 02:39:27.262016 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 02:39:27.262039 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 02:39:27.262060 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 02:39:27.262081 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 02:39:27.262103 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 02:39:27.262183 systemd-journald[1151]: Collecting audit messages is disabled. Jan 24 02:39:27.262254 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 02:39:27.262284 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 02:39:27.262308 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 24 02:39:27.267731 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 02:39:27.267773 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 02:39:27.267818 systemd-journald[1151]: Journal started Jan 24 02:39:27.267872 systemd-journald[1151]: Runtime Journal (/run/log/journal/eb3905c88dbd4da49f815f60c09a2d2b) is 4.7M, max 38.0M, 33.2M free. Jan 24 02:39:26.747755 systemd[1]: Queued start job for default target multi-user.target. Jan 24 02:39:26.769418 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 24 02:39:26.770191 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 02:39:27.277462 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 02:39:27.283185 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 02:39:27.283241 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 02:39:27.289357 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 02:39:27.289420 kernel: ACPI: bus type drm_connector registered Jan 24 02:39:27.297908 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 02:39:27.309466 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 02:39:27.318147 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 02:39:27.335401 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 02:39:27.340208 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 02:39:27.362354 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 24 02:39:27.369730 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 02:39:27.384494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 02:39:27.396349 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 02:39:27.413398 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 02:39:27.422702 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 02:39:27.428522 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 02:39:27.428833 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 02:39:27.431084 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 02:39:27.432717 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 02:39:27.440563 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 02:39:27.442065 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 02:39:27.463410 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 02:39:27.493355 kernel: loop0: detected capacity change from 0 to 140768 Jan 24 02:39:27.498563 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 02:39:27.524983 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 02:39:27.540354 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 02:39:27.542694 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 02:39:27.551553 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 02:39:27.562615 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jan 24 02:39:27.570513 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jan 24 02:39:27.570541 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jan 24 02:39:27.581155 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 02:39:27.591638 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 02:39:27.599661 kernel: loop1: detected capacity change from 0 to 142488 Jan 24 02:39:27.600513 systemd-journald[1151]: Time spent on flushing to /var/log/journal/eb3905c88dbd4da49f815f60c09a2d2b is 67.038ms for 1152 entries. Jan 24 02:39:27.600513 systemd-journald[1151]: System Journal (/var/log/journal/eb3905c88dbd4da49f815f60c09a2d2b) is 8.0M, max 584.8M, 576.8M free. Jan 24 02:39:27.692270 systemd-journald[1151]: Received client request to flush runtime journal. Jan 24 02:39:27.692352 kernel: loop2: detected capacity change from 0 to 229808 Jan 24 02:39:27.648178 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 02:39:27.649362 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 02:39:27.658079 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 02:39:27.706798 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 02:39:27.733272 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 02:39:27.743648 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 02:39:27.754346 kernel: loop3: detected capacity change from 0 to 8 Jan 24 02:39:27.794397 kernel: loop4: detected capacity change from 0 to 140768 Jan 24 02:39:27.828395 kernel: loop5: detected capacity change from 0 to 142488 Jan 24 02:39:27.842468 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. 
Jan 24 02:39:27.842498 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Jan 24 02:39:27.862582 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 02:39:27.870347 kernel: loop6: detected capacity change from 0 to 229808 Jan 24 02:39:27.892362 kernel: loop7: detected capacity change from 0 to 8 Jan 24 02:39:27.908945 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 24 02:39:27.910527 (sd-merge)[1215]: Merged extensions into '/usr'. Jan 24 02:39:27.926375 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 02:39:27.926501 systemd[1]: Reloading... Jan 24 02:39:28.134387 zram_generator::config[1243]: No configuration found. Jan 24 02:39:28.208665 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 02:39:28.372005 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 02:39:28.443245 systemd[1]: Reloading finished in 515 ms. Jan 24 02:39:28.477183 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 02:39:28.478792 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 02:39:28.485053 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 02:39:28.494564 systemd[1]: Starting ensure-sysext.service... Jan 24 02:39:28.497548 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 02:39:28.506578 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 02:39:28.517620 systemd[1]: Reloading requested from client PID 1299 ('systemctl') (unit ensure-sysext.service)... 
Jan 24 02:39:28.517647 systemd[1]: Reloading... Jan 24 02:39:28.550137 systemd-udevd[1301]: Using default interface naming scheme 'v255'. Jan 24 02:39:28.563585 systemd-tmpfiles[1300]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 02:39:28.564176 systemd-tmpfiles[1300]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 02:39:28.569285 systemd-tmpfiles[1300]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 02:39:28.570243 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Jan 24 02:39:28.570401 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Jan 24 02:39:28.581262 systemd-tmpfiles[1300]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 02:39:28.581283 systemd-tmpfiles[1300]: Skipping /boot Jan 24 02:39:28.635713 systemd-tmpfiles[1300]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 02:39:28.641411 systemd-tmpfiles[1300]: Skipping /boot Jan 24 02:39:28.699387 zram_generator::config[1343]: No configuration found. Jan 24 02:39:28.826373 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1338) Jan 24 02:39:28.940593 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 02:39:28.952785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 02:39:28.971455 kernel: ACPI: button: Power Button [PWRF] Jan 24 02:39:29.017726 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 02:39:29.062736 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 02:39:29.064021 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 24 02:39:29.064252 systemd[1]: Reloading finished in 546 ms. Jan 24 02:39:29.072402 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 02:39:29.083913 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 24 02:39:29.099499 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 02:39:29.099772 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 24 02:39:29.109535 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 02:39:29.114710 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 02:39:29.209074 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 02:39:29.225685 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 02:39:29.234218 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 02:39:29.235983 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 02:39:29.242698 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 02:39:29.247663 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 02:39:29.260685 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 02:39:29.262565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 02:39:29.270781 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 02:39:29.280698 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 02:39:29.293006 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 24 02:39:29.304933 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 02:39:29.311606 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 02:39:29.313035 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 02:39:29.341070 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 02:39:29.342515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 02:39:29.352829 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 02:39:29.354915 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 02:39:29.366685 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 02:39:29.385830 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 02:39:29.386655 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 02:39:29.388383 systemd[1]: Finished ensure-sysext.service. Jan 24 02:39:29.407974 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 02:39:29.409872 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 02:39:29.410129 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 02:39:29.412680 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 02:39:29.412922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 02:39:29.416036 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 24 02:39:29.433961 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 02:39:29.446561 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 02:39:29.462655 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 02:39:29.465429 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 02:39:29.468709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 02:39:29.469094 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 02:39:29.472885 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 02:39:29.475154 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 02:39:29.476581 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 02:39:29.492628 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 02:39:29.497957 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 02:39:29.529430 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 02:39:29.538557 augenrules[1451]: No rules Jan 24 02:39:29.540579 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 02:39:29.543555 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 02:39:29.545984 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 02:39:29.570966 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 24 02:39:29.575343 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 02:39:29.645933 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 02:39:29.650828 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 02:39:29.661551 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 02:39:29.679286 systemd-networkd[1419]: lo: Link UP Jan 24 02:39:29.679810 systemd-networkd[1419]: lo: Gained carrier Jan 24 02:39:29.682249 systemd-networkd[1419]: Enumeration completed Jan 24 02:39:29.682602 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 02:39:29.683289 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 02:39:29.683420 systemd-networkd[1419]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 02:39:29.685187 systemd-networkd[1419]: eth0: Link UP Jan 24 02:39:29.685284 systemd-networkd[1419]: eth0: Gained carrier Jan 24 02:39:29.685406 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 02:39:29.700392 systemd-networkd[1419]: eth0: DHCPv4 address 10.230.33.130/30, gateway 10.230.33.129 acquired from 10.230.33.129 Jan 24 02:39:29.754340 systemd-resolved[1421]: Positive Trust Anchors: Jan 24 02:39:29.754368 systemd-resolved[1421]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 02:39:29.754415 systemd-resolved[1421]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 02:39:29.755579 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 02:39:29.757741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 02:39:29.759579 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 02:39:29.768093 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 02:39:29.768379 systemd-resolved[1421]: Using system hostname 'srv-aqhf7.gb1.brightbox.com'. Jan 24 02:39:29.768707 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 02:39:29.773235 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 02:39:29.775489 systemd[1]: Reached target network.target - Network. Jan 24 02:39:29.776158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 02:39:29.778443 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 02:39:29.779268 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 02:39:29.780768 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 02:39:29.783795 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Jan 24 02:39:29.784676 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 02:39:29.785484 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 02:39:29.786258 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 02:39:29.786295 systemd[1]: Reached target paths.target - Path Units. Jan 24 02:39:29.787366 systemd[1]: Reached target timers.target - Timer Units. Jan 24 02:39:29.789120 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 02:39:29.791711 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 02:39:29.798537 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 02:39:29.800287 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 02:39:29.801431 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 02:39:29.802993 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 02:39:29.803728 systemd[1]: Reached target basic.target - Basic System. Jan 24 02:39:29.804664 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 02:39:29.804724 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 02:39:29.816493 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 02:39:29.819229 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 02:39:29.823534 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 02:39:29.830453 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 02:39:29.833548 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 24 02:39:29.834976 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 02:39:29.846555 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 02:39:29.850617 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 02:39:29.858549 jq[1481]: false Jan 24 02:39:29.864510 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 02:39:29.867648 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 02:39:29.878536 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 02:39:29.881713 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 02:39:29.882411 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 02:39:29.890534 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 02:39:29.894466 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 02:39:29.900814 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 02:39:29.901407 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 02:39:29.903038 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 02:39:29.903266 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 02:39:29.927251 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 02:39:29.928689 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 24 02:39:29.940115 update_engine[1497]: I20260124 02:39:29.940006 1497 main.cc:92] Flatcar Update Engine starting Jan 24 02:39:29.943307 dbus-daemon[1480]: [system] SELinux support is enabled Jan 24 02:39:29.943579 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 02:39:29.949919 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 02:39:29.949977 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 02:39:29.952874 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 02:39:29.955615 jq[1498]: true Jan 24 02:39:29.952916 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 24 02:39:29.959056 extend-filesystems[1484]: Found loop4 Jan 24 02:39:29.959056 extend-filesystems[1484]: Found loop5 Jan 24 02:39:29.959056 extend-filesystems[1484]: Found loop6 Jan 24 02:39:29.959056 extend-filesystems[1484]: Found loop7 Jan 24 02:39:29.959056 extend-filesystems[1484]: Found vda Jan 24 02:39:29.959056 extend-filesystems[1484]: Found vda1 Jan 24 02:39:29.959056 extend-filesystems[1484]: Found vda2 Jan 24 02:39:29.959056 extend-filesystems[1484]: Found vda3 Jan 24 02:39:29.959056 extend-filesystems[1484]: Found usr Jan 24 02:39:29.959056 extend-filesystems[1484]: Found vda4 Jan 24 02:39:29.959056 extend-filesystems[1484]: Found vda6 Jan 24 02:39:29.959056 extend-filesystems[1484]: Found vda7 Jan 24 02:39:29.959056 extend-filesystems[1484]: Found vda9 Jan 24 02:39:29.959056 extend-filesystems[1484]: Checking size of /dev/vda9 Jan 24 02:39:29.962499 dbus-daemon[1480]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1419 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 24 02:39:30.010532 tar[1500]: linux-amd64/LICENSE Jan 24 02:39:30.010532 tar[1500]: linux-amd64/helm Jan 24 02:39:30.010951 update_engine[1497]: I20260124 02:39:29.965584 1497 update_check_scheduler.cc:74] Next update check in 2m46s Jan 24 02:39:29.970441 systemd[1]: Started update-engine.service - Update Engine. Jan 24 02:39:29.991311 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 24 02:39:30.000642 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 02:39:30.008640 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 24 02:39:30.019917 extend-filesystems[1484]: Resized partition /dev/vda9 Jan 24 02:39:30.028509 extend-filesystems[1520]: resize2fs 1.47.1 (20-May-2024) Jan 24 02:39:30.041276 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 24 02:39:30.041421 jq[1512]: true Jan 24 02:39:30.080523 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1330) Jan 24 02:39:30.132437 systemd-timesyncd[1438]: Contacted time server 213.5.132.231:123 (0.flatcar.pool.ntp.org). Jan 24 02:39:30.133734 systemd-timesyncd[1438]: Initial clock synchronization to Sat 2026-01-24 02:39:30.519052 UTC. Jan 24 02:39:30.185945 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 02:39:30.241506 systemd-logind[1491]: Watching system buttons on /dev/input/event2 (Power Button) Jan 24 02:39:30.241559 systemd-logind[1491]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 02:39:30.242556 systemd-logind[1491]: New seat seat0. Jan 24 02:39:30.245470 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 02:39:30.296804 bash[1542]: Updated "/home/core/.ssh/authorized_keys" Jan 24 02:39:30.302512 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 02:39:30.314718 systemd[1]: Starting sshkeys.service... Jan 24 02:39:30.329481 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 02:39:30.341289 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 24 02:39:30.339893 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 24 02:39:30.361586 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 24 02:39:30.362265 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Jan 24 02:39:30.362578 dbus-daemon[1480]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1516 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 24 02:39:30.372565 extend-filesystems[1520]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 24 02:39:30.372565 extend-filesystems[1520]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 24 02:39:30.372565 extend-filesystems[1520]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 24 02:39:30.381237 extend-filesystems[1484]: Resized filesystem in /dev/vda9
Jan 24 02:39:30.375255 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 24 02:39:30.380458 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 24 02:39:30.380795 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 24 02:39:30.432233 polkitd[1552]: Started polkitd version 121
Jan 24 02:39:30.444865 polkitd[1552]: Loading rules from directory /etc/polkit-1/rules.d
Jan 24 02:39:30.447483 polkitd[1552]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 24 02:39:30.450627 polkitd[1552]: Finished loading, compiling and executing 2 rules
Jan 24 02:39:30.455246 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 24 02:39:30.456500 systemd[1]: Started polkit.service - Authorization Manager.
Jan 24 02:39:30.459709 polkitd[1552]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 24 02:39:30.492052 systemd-hostnamed[1516]: Hostname set to (static)
Jan 24 02:39:30.554356 containerd[1514]: time="2026-01-24T02:39:30.553469558Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 24 02:39:30.632720 containerd[1514]: time="2026-01-24T02:39:30.631306943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 24 02:39:30.640192 containerd[1514]: time="2026-01-24T02:39:30.640141849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 24 02:39:30.640353 containerd[1514]: time="2026-01-24T02:39:30.640292861Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 24 02:39:30.640518 containerd[1514]: time="2026-01-24T02:39:30.640490164Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 24 02:39:30.641190 containerd[1514]: time="2026-01-24T02:39:30.641161109Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 24 02:39:30.641452 containerd[1514]: time="2026-01-24T02:39:30.641422428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 24 02:39:30.642089 containerd[1514]: time="2026-01-24T02:39:30.641952977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 02:39:30.642089 containerd[1514]: time="2026-01-24T02:39:30.641986137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 24 02:39:30.643713 containerd[1514]: time="2026-01-24T02:39:30.642741067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 02:39:30.643713 containerd[1514]: time="2026-01-24T02:39:30.642786132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 24 02:39:30.643713 containerd[1514]: time="2026-01-24T02:39:30.642823107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 02:39:30.643713 containerd[1514]: time="2026-01-24T02:39:30.642843807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 24 02:39:30.643713 containerd[1514]: time="2026-01-24T02:39:30.642970622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 24 02:39:30.644401 containerd[1514]: time="2026-01-24T02:39:30.644373735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 24 02:39:30.645006 containerd[1514]: time="2026-01-24T02:39:30.644974312Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 02:39:30.645118 containerd[1514]: time="2026-01-24T02:39:30.645093969Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 24 02:39:30.646094 containerd[1514]: time="2026-01-24T02:39:30.645691098Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 24 02:39:30.646094 containerd[1514]: time="2026-01-24T02:39:30.645816588Z" level=info msg="metadata content store policy set" policy=shared
Jan 24 02:39:30.650299 containerd[1514]: time="2026-01-24T02:39:30.650268531Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 24 02:39:30.650686 containerd[1514]: time="2026-01-24T02:39:30.650657725Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 24 02:39:30.650846 containerd[1514]: time="2026-01-24T02:39:30.650820703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 24 02:39:30.652420 containerd[1514]: time="2026-01-24T02:39:30.650929079Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 24 02:39:30.652420 containerd[1514]: time="2026-01-24T02:39:30.650973415Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 24 02:39:30.652420 containerd[1514]: time="2026-01-24T02:39:30.651167399Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 24 02:39:30.652870 containerd[1514]: time="2026-01-24T02:39:30.652837080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 24 02:39:30.653157 containerd[1514]: time="2026-01-24T02:39:30.653129596Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 24 02:39:30.653748 containerd[1514]: time="2026-01-24T02:39:30.653713544Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 24 02:39:30.653880 containerd[1514]: time="2026-01-24T02:39:30.653854909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 24 02:39:30.653977 containerd[1514]: time="2026-01-24T02:39:30.653952692Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 24 02:39:30.654079 containerd[1514]: time="2026-01-24T02:39:30.654054153Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 24 02:39:30.654183 containerd[1514]: time="2026-01-24T02:39:30.654158509Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 24 02:39:30.654389 containerd[1514]: time="2026-01-24T02:39:30.654361026Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 24 02:39:30.655812 containerd[1514]: time="2026-01-24T02:39:30.654773614Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 24 02:39:30.655812 containerd[1514]: time="2026-01-24T02:39:30.654809862Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 24 02:39:30.655812 containerd[1514]: time="2026-01-24T02:39:30.654830264Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 24 02:39:30.655812 containerd[1514]: time="2026-01-24T02:39:30.654848100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 24 02:39:30.655812 containerd[1514]: time="2026-01-24T02:39:30.654887921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.655812 containerd[1514]: time="2026-01-24T02:39:30.654927498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.655812 containerd[1514]: time="2026-01-24T02:39:30.654952814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.655812 containerd[1514]: time="2026-01-24T02:39:30.654973342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.655812 containerd[1514]: time="2026-01-24T02:39:30.654992358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.655812 containerd[1514]: time="2026-01-24T02:39:30.655032499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.655812 containerd[1514]: time="2026-01-24T02:39:30.655063460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.655812 containerd[1514]: time="2026-01-24T02:39:30.655087022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.655812 containerd[1514]: time="2026-01-24T02:39:30.655115142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.655812 containerd[1514]: time="2026-01-24T02:39:30.655146942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.656242 containerd[1514]: time="2026-01-24T02:39:30.655167149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.656242 containerd[1514]: time="2026-01-24T02:39:30.655186574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.656242 containerd[1514]: time="2026-01-24T02:39:30.655205711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.656242 containerd[1514]: time="2026-01-24T02:39:30.655229408Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 24 02:39:30.656242 containerd[1514]: time="2026-01-24T02:39:30.655275495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.656242 containerd[1514]: time="2026-01-24T02:39:30.655309652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.656242 containerd[1514]: time="2026-01-24T02:39:30.655346837Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 24 02:39:30.659107 containerd[1514]: time="2026-01-24T02:39:30.656925192Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 24 02:39:30.659107 containerd[1514]: time="2026-01-24T02:39:30.657095282Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 24 02:39:30.659107 containerd[1514]: time="2026-01-24T02:39:30.657119075Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 24 02:39:30.659107 containerd[1514]: time="2026-01-24T02:39:30.657161694Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 24 02:39:30.659107 containerd[1514]: time="2026-01-24T02:39:30.657182026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.659107 containerd[1514]: time="2026-01-24T02:39:30.657231781Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 24 02:39:30.659107 containerd[1514]: time="2026-01-24T02:39:30.657258440Z" level=info msg="NRI interface is disabled by configuration."
Jan 24 02:39:30.659107 containerd[1514]: time="2026-01-24T02:39:30.657276559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 24 02:39:30.659426 containerd[1514]: time="2026-01-24T02:39:30.657781186Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 24 02:39:30.659426 containerd[1514]: time="2026-01-24T02:39:30.657882611Z" level=info msg="Connect containerd service"
Jan 24 02:39:30.659426 containerd[1514]: time="2026-01-24T02:39:30.657929717Z" level=info msg="using legacy CRI server"
Jan 24 02:39:30.659426 containerd[1514]: time="2026-01-24T02:39:30.657944579Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 24 02:39:30.659426 containerd[1514]: time="2026-01-24T02:39:30.658144658Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 24 02:39:30.666614 containerd[1514]: time="2026-01-24T02:39:30.666570905Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 24 02:39:30.666878 containerd[1514]: time="2026-01-24T02:39:30.666790425Z" level=info msg="Start subscribing containerd event"
Jan 24 02:39:30.667291 containerd[1514]: time="2026-01-24T02:39:30.667253436Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 24 02:39:30.667392 containerd[1514]: time="2026-01-24T02:39:30.667366278Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 24 02:39:30.667495 containerd[1514]: time="2026-01-24T02:39:30.667468297Z" level=info msg="Start recovering state"
Jan 24 02:39:30.667740 containerd[1514]: time="2026-01-24T02:39:30.667714761Z" level=info msg="Start event monitor"
Jan 24 02:39:30.668209 containerd[1514]: time="2026-01-24T02:39:30.668180627Z" level=info msg="Start snapshots syncer"
Jan 24 02:39:30.668706 containerd[1514]: time="2026-01-24T02:39:30.668281121Z" level=info msg="Start cni network conf syncer for default"
Jan 24 02:39:30.668706 containerd[1514]: time="2026-01-24T02:39:30.668305587Z" level=info msg="Start streaming server"
Jan 24 02:39:30.668964 containerd[1514]: time="2026-01-24T02:39:30.668937186Z" level=info msg="containerd successfully booted in 0.120873s"
Jan 24 02:39:30.669044 systemd[1]: Started containerd.service - containerd container runtime.
Jan 24 02:39:30.778471 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 24 02:39:30.809262 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 24 02:39:30.819891 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 24 02:39:30.829896 systemd[1]: issuegen.service: Deactivated successfully.
Jan 24 02:39:30.830232 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 24 02:39:30.838826 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 24 02:39:30.854740 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 24 02:39:30.865008 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 24 02:39:30.876923 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 24 02:39:30.879371 systemd[1]: Reached target getty.target - Login Prompts.
Jan 24 02:39:30.982989 tar[1500]: linux-amd64/README.md
Jan 24 02:39:30.996838 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 24 02:39:31.471342 systemd-networkd[1419]: eth0: Gained IPv6LL
Jan 24 02:39:31.474846 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 24 02:39:31.477934 systemd[1]: Reached target network-online.target - Network is Online.
Jan 24 02:39:31.495478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 02:39:31.498640 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 24 02:39:31.531871 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 24 02:39:32.556524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 02:39:32.569940 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 02:39:32.979205 systemd-networkd[1419]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8860:24:19ff:fee6:2182/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8860:24:19ff:fee6:2182/64 assigned by NDisc.
Jan 24 02:39:32.979219 systemd-networkd[1419]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 24 02:39:33.243645 kubelet[1603]: E0124 02:39:33.243449 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 02:39:33.247339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 02:39:33.247779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 02:39:33.248533 systemd[1]: kubelet.service: Consumed 1.124s CPU time.
Jan 24 02:39:34.349512 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 24 02:39:34.368032 systemd[1]: Started sshd@0-10.230.33.130:22-101.47.140.255:34262.service - OpenSSH per-connection server daemon (101.47.140.255:34262).
Jan 24 02:39:35.699906 systemd[1]: Started sshd@1-10.230.33.130:22-20.161.92.111:58742.service - OpenSSH per-connection server daemon (20.161.92.111:58742).
Jan 24 02:39:35.823530 sshd[1615]: Invalid user wireguard from 101.47.140.255 port 34262
Jan 24 02:39:35.941689 login[1582]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 24 02:39:35.943481 login[1583]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 24 02:39:35.963667 systemd-logind[1491]: New session 2 of user core.
Jan 24 02:39:35.968487 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 24 02:39:35.975135 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 24 02:39:35.980139 systemd-logind[1491]: New session 1 of user core.
Jan 24 02:39:36.006311 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 24 02:39:36.014443 sshd[1615]: Received disconnect from 101.47.140.255 port 34262:11: Bye Bye [preauth]
Jan 24 02:39:36.014443 sshd[1615]: Disconnected from invalid user wireguard 101.47.140.255 port 34262 [preauth]
Jan 24 02:39:36.015998 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 24 02:39:36.016698 systemd[1]: sshd@0-10.230.33.130:22-101.47.140.255:34262.service: Deactivated successfully.
Jan 24 02:39:36.029782 (systemd)[1626]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 24 02:39:36.188573 systemd[1626]: Queued start job for default target default.target.
Jan 24 02:39:36.202819 systemd[1626]: Created slice app.slice - User Application Slice.
Jan 24 02:39:36.202878 systemd[1626]: Reached target paths.target - Paths.
Jan 24 02:39:36.202915 systemd[1626]: Reached target timers.target - Timers.
Jan 24 02:39:36.205839 systemd[1626]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 24 02:39:36.226375 systemd[1626]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 24 02:39:36.226609 systemd[1626]: Reached target sockets.target - Sockets.
Jan 24 02:39:36.226636 systemd[1626]: Reached target basic.target - Basic System.
Jan 24 02:39:36.226739 systemd[1626]: Reached target default.target - Main User Target.
Jan 24 02:39:36.226815 systemd[1626]: Startup finished in 187ms.
Jan 24 02:39:36.227287 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 24 02:39:36.241677 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 24 02:39:36.243250 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 24 02:39:36.284337 sshd[1618]: Accepted publickey for core from 20.161.92.111 port 58742 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 02:39:36.289814 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 02:39:36.304432 systemd-logind[1491]: New session 3 of user core.
Jan 24 02:39:36.310688 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 24 02:39:36.798889 systemd[1]: Started sshd@2-10.230.33.130:22-20.161.92.111:58758.service - OpenSSH per-connection server daemon (20.161.92.111:58758).
Jan 24 02:39:36.926854 coreos-metadata[1479]: Jan 24 02:39:36.926 WARN failed to locate config-drive, using the metadata service API instead
Jan 24 02:39:37.002260 coreos-metadata[1479]: Jan 24 02:39:37.002 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 24 02:39:37.012221 coreos-metadata[1479]: Jan 24 02:39:37.012 INFO Fetch failed with 404: resource not found
Jan 24 02:39:37.012221 coreos-metadata[1479]: Jan 24 02:39:37.012 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 24 02:39:37.012816 coreos-metadata[1479]: Jan 24 02:39:37.012 INFO Fetch successful
Jan 24 02:39:37.013100 coreos-metadata[1479]: Jan 24 02:39:37.013 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 24 02:39:37.027048 coreos-metadata[1479]: Jan 24 02:39:37.026 INFO Fetch successful
Jan 24 02:39:37.027400 coreos-metadata[1479]: Jan 24 02:39:37.027 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 24 02:39:37.044267 coreos-metadata[1479]: Jan 24 02:39:37.044 INFO Fetch successful
Jan 24 02:39:37.044702 coreos-metadata[1479]: Jan 24 02:39:37.044 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 24 02:39:37.061748 coreos-metadata[1479]: Jan 24 02:39:37.061 INFO Fetch successful
Jan 24 02:39:37.061950 coreos-metadata[1479]: Jan 24 02:39:37.061 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 24 02:39:37.082985 coreos-metadata[1479]: Jan 24 02:39:37.082 INFO Fetch successful
Jan 24 02:39:37.109823 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 24 02:39:37.111689 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 24 02:39:37.391864 sshd[1662]: Accepted publickey for core from 20.161.92.111 port 58758 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 02:39:37.393957 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 02:39:37.402916 systemd-logind[1491]: New session 4 of user core.
Jan 24 02:39:37.411739 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 24 02:39:37.487857 coreos-metadata[1549]: Jan 24 02:39:37.487 WARN failed to locate config-drive, using the metadata service API instead
Jan 24 02:39:37.512584 coreos-metadata[1549]: Jan 24 02:39:37.512 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 24 02:39:37.537085 coreos-metadata[1549]: Jan 24 02:39:37.537 INFO Fetch successful
Jan 24 02:39:37.537204 coreos-metadata[1549]: Jan 24 02:39:37.537 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 24 02:39:37.561691 coreos-metadata[1549]: Jan 24 02:39:37.561 INFO Fetch successful
Jan 24 02:39:37.564325 unknown[1549]: wrote ssh authorized keys file for user: core
Jan 24 02:39:37.586511 update-ssh-keys[1675]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 02:39:37.587458 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 24 02:39:37.591351 systemd[1]: Finished sshkeys.service.
Jan 24 02:39:37.592804 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 24 02:39:37.595445 systemd[1]: Startup finished in 1.472s (kernel) + 15.136s (initrd) + 11.754s (userspace) = 28.364s.
Jan 24 02:39:37.811531 sshd[1662]: pam_unix(sshd:session): session closed for user core
Jan 24 02:39:37.816269 systemd-logind[1491]: Session 4 logged out. Waiting for processes to exit.
Jan 24 02:39:37.816702 systemd[1]: sshd@2-10.230.33.130:22-20.161.92.111:58758.service: Deactivated successfully.
Jan 24 02:39:37.818980 systemd[1]: session-4.scope: Deactivated successfully.
Jan 24 02:39:37.821265 systemd-logind[1491]: Removed session 4.
Jan 24 02:39:37.920699 systemd[1]: Started sshd@3-10.230.33.130:22-20.161.92.111:58760.service - OpenSSH per-connection server daemon (20.161.92.111:58760).
Jan 24 02:39:38.523294 sshd[1682]: Accepted publickey for core from 20.161.92.111 port 58760 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 02:39:38.525548 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 02:39:38.532356 systemd-logind[1491]: New session 5 of user core.
Jan 24 02:39:38.542682 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 24 02:39:38.965238 sshd[1682]: pam_unix(sshd:session): session closed for user core
Jan 24 02:39:38.970943 systemd[1]: sshd@3-10.230.33.130:22-20.161.92.111:58760.service: Deactivated successfully.
Jan 24 02:39:38.973448 systemd[1]: session-5.scope: Deactivated successfully.
Jan 24 02:39:38.974525 systemd-logind[1491]: Session 5 logged out. Waiting for processes to exit.
Jan 24 02:39:38.976329 systemd-logind[1491]: Removed session 5.
Jan 24 02:39:39.076734 systemd[1]: Started sshd@4-10.230.33.130:22-20.161.92.111:58764.service - OpenSSH per-connection server daemon (20.161.92.111:58764).
Jan 24 02:39:39.676049 sshd[1689]: Accepted publickey for core from 20.161.92.111 port 58764 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 02:39:39.678139 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 02:39:39.684460 systemd-logind[1491]: New session 6 of user core.
Jan 24 02:39:39.692569 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 24 02:39:40.100609 sshd[1689]: pam_unix(sshd:session): session closed for user core
Jan 24 02:39:40.105961 systemd[1]: sshd@4-10.230.33.130:22-20.161.92.111:58764.service: Deactivated successfully.
Jan 24 02:39:40.108105 systemd[1]: session-6.scope: Deactivated successfully.
Jan 24 02:39:40.109051 systemd-logind[1491]: Session 6 logged out. Waiting for processes to exit.
Jan 24 02:39:40.110552 systemd-logind[1491]: Removed session 6.
Jan 24 02:39:40.209324 systemd[1]: Started sshd@5-10.230.33.130:22-20.161.92.111:58772.service - OpenSSH per-connection server daemon (20.161.92.111:58772).
Jan 24 02:39:40.800468 sshd[1696]: Accepted publickey for core from 20.161.92.111 port 58772 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 02:39:40.802601 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 02:39:40.810867 systemd-logind[1491]: New session 7 of user core.
Jan 24 02:39:40.816631 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 24 02:39:41.139414 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 24 02:39:41.139893 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 02:39:41.157471 sudo[1699]: pam_unix(sudo:session): session closed for user root
Jan 24 02:39:41.267163 sshd[1696]: pam_unix(sshd:session): session closed for user core
Jan 24 02:39:41.272583 systemd-logind[1491]: Session 7 logged out. Waiting for processes to exit.
Jan 24 02:39:41.273895 systemd[1]: sshd@5-10.230.33.130:22-20.161.92.111:58772.service: Deactivated successfully.
Jan 24 02:39:41.276627 systemd[1]: session-7.scope: Deactivated successfully.
Jan 24 02:39:41.278210 systemd-logind[1491]: Removed session 7.
Jan 24 02:39:41.367893 systemd[1]: Started sshd@6-10.230.33.130:22-20.161.92.111:58774.service - OpenSSH per-connection server daemon (20.161.92.111:58774).
Jan 24 02:39:41.959192 sshd[1704]: Accepted publickey for core from 20.161.92.111 port 58774 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:39:41.961471 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:39:41.969316 systemd-logind[1491]: New session 8 of user core. Jan 24 02:39:41.979705 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 02:39:42.278680 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 02:39:42.279166 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 02:39:42.285100 sudo[1708]: pam_unix(sudo:session): session closed for user root Jan 24 02:39:42.293543 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 02:39:42.294026 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 02:39:42.313696 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 02:39:42.317569 auditctl[1711]: No rules Jan 24 02:39:42.318065 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 02:39:42.318399 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 02:39:42.333053 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 02:39:42.366995 augenrules[1729]: No rules Jan 24 02:39:42.367973 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 02:39:42.369764 sudo[1707]: pam_unix(sudo:session): session closed for user root Jan 24 02:39:42.476410 sshd[1704]: pam_unix(sshd:session): session closed for user core Jan 24 02:39:42.480770 systemd-logind[1491]: Session 8 logged out. Waiting for processes to exit. Jan 24 02:39:42.481378 systemd[1]: sshd@6-10.230.33.130:22-20.161.92.111:58774.service: Deactivated successfully. 
Jan 24 02:39:42.483766 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 02:39:42.486059 systemd-logind[1491]: Removed session 8. Jan 24 02:39:42.568768 systemd[1]: Started sshd@7-10.230.33.130:22-20.161.92.111:58296.service - OpenSSH per-connection server daemon (20.161.92.111:58296). Jan 24 02:39:43.134859 sshd[1737]: Accepted publickey for core from 20.161.92.111 port 58296 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:39:43.137531 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:39:43.144887 systemd-logind[1491]: New session 9 of user core. Jan 24 02:39:43.151611 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 02:39:43.282616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 02:39:43.289606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 02:39:43.292542 systemd[1]: Started sshd@8-10.230.33.130:22-157.245.70.174:53012.service - OpenSSH per-connection server daemon (157.245.70.174:53012). Jan 24 02:39:43.452605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 02:39:43.455239 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 02:39:43.455767 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 02:39:43.465953 (kubelet)[1752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 02:39:43.546741 sshd[1742]: Connection closed by authenticating user root 157.245.70.174 port 53012 [preauth] Jan 24 02:39:43.549009 systemd[1]: sshd@8-10.230.33.130:22-157.245.70.174:53012.service: Deactivated successfully. 
Jan 24 02:39:43.569777 kubelet[1752]: E0124 02:39:43.569640 1752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 02:39:43.574770 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 02:39:43.575048 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 02:39:43.985032 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 02:39:43.986926 (dockerd)[1775]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 02:39:44.424263 dockerd[1775]: time="2026-01-24T02:39:44.424053250Z" level=info msg="Starting up" Jan 24 02:39:44.554850 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3347986171-merged.mount: Deactivated successfully. Jan 24 02:39:44.588681 dockerd[1775]: time="2026-01-24T02:39:44.588602246Z" level=info msg="Loading containers: start." Jan 24 02:39:44.725388 kernel: Initializing XFRM netlink socket Jan 24 02:39:44.846009 systemd-networkd[1419]: docker0: Link UP Jan 24 02:39:44.866462 dockerd[1775]: time="2026-01-24T02:39:44.866415768Z" level=info msg="Loading containers: done." Jan 24 02:39:44.886594 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2917350130-merged.mount: Deactivated successfully. 
Jan 24 02:39:44.888195 dockerd[1775]: time="2026-01-24T02:39:44.887631954Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 02:39:44.888195 dockerd[1775]: time="2026-01-24T02:39:44.887807094Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 02:39:44.888195 dockerd[1775]: time="2026-01-24T02:39:44.888100129Z" level=info msg="Daemon has completed initialization" Jan 24 02:39:44.928616 dockerd[1775]: time="2026-01-24T02:39:44.928428791Z" level=info msg="API listen on /run/docker.sock" Jan 24 02:39:44.928969 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 02:39:46.169675 containerd[1514]: time="2026-01-24T02:39:46.169563211Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 24 02:39:46.904646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2218810465.mount: Deactivated successfully. 
Jan 24 02:39:48.880398 containerd[1514]: time="2026-01-24T02:39:48.880181952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:39:48.882001 containerd[1514]: time="2026-01-24T02:39:48.881934669Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114720" Jan 24 02:39:48.885384 containerd[1514]: time="2026-01-24T02:39:48.883511167Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:39:48.890000 containerd[1514]: time="2026-01-24T02:39:48.889282483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:39:48.891145 containerd[1514]: time="2026-01-24T02:39:48.891094744Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.72139426s" Jan 24 02:39:48.891248 containerd[1514]: time="2026-01-24T02:39:48.891203679Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 24 02:39:48.893861 containerd[1514]: time="2026-01-24T02:39:48.893730421Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 24 02:39:51.581630 systemd[1]: Started sshd@9-10.230.33.130:22-159.223.6.232:41798.service - OpenSSH per-connection server daemon (159.223.6.232:41798). 
Jan 24 02:39:51.741377 sshd[1983]: Invalid user mysql from 159.223.6.232 port 41798 Jan 24 02:39:51.759549 sshd[1983]: Connection closed by invalid user mysql 159.223.6.232 port 41798 [preauth] Jan 24 02:39:51.763221 systemd[1]: sshd@9-10.230.33.130:22-159.223.6.232:41798.service: Deactivated successfully. Jan 24 02:39:52.585759 containerd[1514]: time="2026-01-24T02:39:52.585481869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:39:52.588198 containerd[1514]: time="2026-01-24T02:39:52.588044127Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016789" Jan 24 02:39:52.589571 containerd[1514]: time="2026-01-24T02:39:52.589521008Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:39:52.597088 containerd[1514]: time="2026-01-24T02:39:52.596997285Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 3.703185924s" Jan 24 02:39:52.597242 containerd[1514]: time="2026-01-24T02:39:52.597126581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 24 02:39:52.597406 containerd[1514]: time="2026-01-24T02:39:52.597360112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:39:52.600703 containerd[1514]: time="2026-01-24T02:39:52.599904789Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 24 02:39:53.799799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 02:39:53.808637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 02:39:54.233258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 02:39:54.251413 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 02:39:54.338720 kubelet[1995]: E0124 02:39:54.338636 1995 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 02:39:54.341966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 02:39:54.342241 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 24 02:39:55.442363 containerd[1514]: time="2026-01-24T02:39:55.440680967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:39:55.443136 containerd[1514]: time="2026-01-24T02:39:55.442636327Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158110" Jan 24 02:39:55.444001 containerd[1514]: time="2026-01-24T02:39:55.443962740Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:39:55.450461 containerd[1514]: time="2026-01-24T02:39:55.450385608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:39:55.453848 containerd[1514]: time="2026-01-24T02:39:55.453713630Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.853752076s" Jan 24 02:39:55.453848 containerd[1514]: time="2026-01-24T02:39:55.453781979Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 24 02:39:55.455960 containerd[1514]: time="2026-01-24T02:39:55.455510776Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 24 02:39:57.456122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1595293248.mount: Deactivated successfully. 
Jan 24 02:39:58.286719 containerd[1514]: time="2026-01-24T02:39:58.286511040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:39:58.290415 containerd[1514]: time="2026-01-24T02:39:58.290348447Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930104" Jan 24 02:39:58.298077 containerd[1514]: time="2026-01-24T02:39:58.296576660Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:39:58.303015 containerd[1514]: time="2026-01-24T02:39:58.302971534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:39:58.303954 containerd[1514]: time="2026-01-24T02:39:58.303909655Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.848347025s" Jan 24 02:39:58.304034 containerd[1514]: time="2026-01-24T02:39:58.303959844Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 24 02:39:58.305519 containerd[1514]: time="2026-01-24T02:39:58.305456595Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 24 02:39:58.874474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681084627.mount: Deactivated successfully. 
Jan 24 02:40:00.751605 containerd[1514]: time="2026-01-24T02:40:00.751400863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:40:00.754397 containerd[1514]: time="2026-01-24T02:40:00.753503134Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Jan 24 02:40:00.756356 containerd[1514]: time="2026-01-24T02:40:00.756016402Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:40:00.761445 containerd[1514]: time="2026-01-24T02:40:00.761409686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:40:00.764168 containerd[1514]: time="2026-01-24T02:40:00.764102270Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.45859711s" Jan 24 02:40:00.764283 containerd[1514]: time="2026-01-24T02:40:00.764212292Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 24 02:40:00.765902 containerd[1514]: time="2026-01-24T02:40:00.765579072Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 02:40:01.434977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount550091263.mount: Deactivated successfully. 
Jan 24 02:40:01.445989 containerd[1514]: time="2026-01-24T02:40:01.445896673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:40:01.448146 containerd[1514]: time="2026-01-24T02:40:01.447818302Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 24 02:40:01.450341 containerd[1514]: time="2026-01-24T02:40:01.448957957Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:40:01.454742 containerd[1514]: time="2026-01-24T02:40:01.454693106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:40:01.458499 containerd[1514]: time="2026-01-24T02:40:01.458458397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 692.829767ms" Jan 24 02:40:01.458617 containerd[1514]: time="2026-01-24T02:40:01.458508201Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 02:40:01.459942 containerd[1514]: time="2026-01-24T02:40:01.459905356Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 24 02:40:02.186110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount582496861.mount: Deactivated successfully. Jan 24 02:40:03.011968 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 24 02:40:04.548685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 24 02:40:04.558711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 02:40:05.463569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 02:40:05.479849 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 02:40:05.614358 kubelet[2133]: E0124 02:40:05.612954 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 02:40:05.617034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 02:40:05.617796 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 24 02:40:07.072823 containerd[1514]: time="2026-01-24T02:40:07.072739689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:40:07.074805 containerd[1514]: time="2026-01-24T02:40:07.074562525Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926235" Jan 24 02:40:07.077349 containerd[1514]: time="2026-01-24T02:40:07.075703661Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:40:07.080227 containerd[1514]: time="2026-01-24T02:40:07.080178767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:40:07.082086 containerd[1514]: time="2026-01-24T02:40:07.082041958Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.622090855s" Jan 24 02:40:07.082173 containerd[1514]: time="2026-01-24T02:40:07.082094604Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 24 02:40:13.562182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 02:40:13.578959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 02:40:13.620985 systemd[1]: Reloading requested from client PID 2172 ('systemctl') (unit session-9.scope)... Jan 24 02:40:13.621026 systemd[1]: Reloading... 
Jan 24 02:40:13.828356 zram_generator::config[2214]: No configuration found. Jan 24 02:40:13.980722 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 02:40:14.093887 systemd[1]: Reloading finished in 472 ms. Jan 24 02:40:14.176622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 02:40:14.180855 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 02:40:14.183593 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 02:40:14.184105 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 02:40:14.190648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 02:40:14.352405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 02:40:14.373894 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 02:40:14.537491 kubelet[2280]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 02:40:14.538207 kubelet[2280]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 02:40:14.538207 kubelet[2280]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 02:40:14.540581 kubelet[2280]: I0124 02:40:14.540363 2280 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 02:40:14.692780 systemd[1]: Started sshd@10-10.230.33.130:22-157.245.70.174:60978.service - OpenSSH per-connection server daemon (157.245.70.174:60978). Jan 24 02:40:15.013000 sshd[2287]: Connection closed by authenticating user root 157.245.70.174 port 60978 [preauth] Jan 24 02:40:15.016737 systemd[1]: sshd@10-10.230.33.130:22-157.245.70.174:60978.service: Deactivated successfully. Jan 24 02:40:15.241575 update_engine[1497]: I20260124 02:40:15.240252 1497 update_attempter.cc:509] Updating boot flags... Jan 24 02:40:15.302395 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2298) Jan 24 02:40:15.506361 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2301) Jan 24 02:40:15.518357 kubelet[2280]: I0124 02:40:15.516731 2280 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 02:40:15.518543 kubelet[2280]: I0124 02:40:15.518519 2280 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 02:40:15.519729 kubelet[2280]: I0124 02:40:15.518963 2280 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 02:40:15.605039 kubelet[2280]: I0124 02:40:15.604908 2280 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 02:40:15.607227 kubelet[2280]: E0124 02:40:15.607149 2280 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.33.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.33.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 02:40:15.624598 
kubelet[2280]: E0124 02:40:15.624538 2280 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 02:40:15.624598 kubelet[2280]: I0124 02:40:15.624600 2280 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 02:40:15.633746 kubelet[2280]: I0124 02:40:15.633699 2280 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 24 02:40:15.638799 kubelet[2280]: I0124 02:40:15.638723 2280 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 02:40:15.645146 kubelet[2280]: I0124 02:40:15.638775 2280 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-aqhf7.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFre
e","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 02:40:15.645146 kubelet[2280]: I0124 02:40:15.644831 2280 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 02:40:15.645146 kubelet[2280]: I0124 02:40:15.644859 2280 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 02:40:15.646642 kubelet[2280]: I0124 02:40:15.646619 2280 state_mem.go:36] "Initialized new in-memory state store" Jan 24 02:40:15.651148 kubelet[2280]: I0124 02:40:15.651066 2280 kubelet.go:480] "Attempting to sync node with API server" Jan 24 02:40:15.651244 kubelet[2280]: I0124 02:40:15.651152 2280 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 02:40:15.651244 kubelet[2280]: I0124 02:40:15.651221 2280 kubelet.go:386] "Adding apiserver pod source" Jan 24 02:40:15.653181 kubelet[2280]: I0124 02:40:15.653107 2280 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 02:40:15.659780 kubelet[2280]: E0124 02:40:15.659169 2280 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.33.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-aqhf7.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.33.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 02:40:15.662547 kubelet[2280]: E0124 02:40:15.662516 2280 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.230.33.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.33.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 02:40:15.662815 kubelet[2280]: I0124 02:40:15.662778 2280 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 02:40:15.672435 kubelet[2280]: I0124 02:40:15.672153 2280 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 02:40:15.673950 kubelet[2280]: W0124 02:40:15.673913 2280 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 02:40:15.681530 kubelet[2280]: I0124 02:40:15.681488 2280 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 02:40:15.681672 kubelet[2280]: I0124 02:40:15.681616 2280 server.go:1289] "Started kubelet" Jan 24 02:40:15.682375 kubelet[2280]: I0124 02:40:15.681812 2280 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 02:40:15.687281 kubelet[2280]: I0124 02:40:15.685885 2280 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 02:40:15.687281 kubelet[2280]: I0124 02:40:15.686194 2280 server.go:317] "Adding debug handlers to kubelet server" Jan 24 02:40:15.687281 kubelet[2280]: I0124 02:40:15.686720 2280 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 02:40:15.693855 kubelet[2280]: E0124 02:40:15.690579 2280 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.33.130:6443/api/v1/namespaces/default/events\": dial tcp 10.230.33.130:6443: connect: connection refused" 
event="&Event{ObjectMeta:{srv-aqhf7.gb1.brightbox.com.188d8a69cf5b483c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-aqhf7.gb1.brightbox.com,UID:srv-aqhf7.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-aqhf7.gb1.brightbox.com,},FirstTimestamp:2026-01-24 02:40:15.681538108 +0000 UTC m=+1.300222843,LastTimestamp:2026-01-24 02:40:15.681538108 +0000 UTC m=+1.300222843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-aqhf7.gb1.brightbox.com,}" Jan 24 02:40:15.696720 kubelet[2280]: I0124 02:40:15.696690 2280 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 02:40:15.697551 kubelet[2280]: I0124 02:40:15.697495 2280 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 02:40:15.697870 kubelet[2280]: I0124 02:40:15.697851 2280 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 02:40:15.702502 kubelet[2280]: E0124 02:40:15.701755 2280 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-aqhf7.gb1.brightbox.com\" not found" Jan 24 02:40:15.705767 kubelet[2280]: I0124 02:40:15.705742 2280 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 02:40:15.706073 kubelet[2280]: I0124 02:40:15.706036 2280 reconciler.go:26] "Reconciler: start to sync state" Jan 24 02:40:15.708554 kubelet[2280]: E0124 02:40:15.708506 2280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.33.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-aqhf7.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.33.130:6443: connect: connection refused" interval="200ms" Jan 24 
02:40:15.708886 kubelet[2280]: I0124 02:40:15.708854 2280 factory.go:223] Registration of the systemd container factory successfully Jan 24 02:40:15.709009 kubelet[2280]: I0124 02:40:15.708982 2280 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 02:40:15.710694 kubelet[2280]: E0124 02:40:15.710570 2280 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.33.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.33.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 02:40:15.712667 kubelet[2280]: E0124 02:40:15.712064 2280 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 02:40:15.713503 kubelet[2280]: I0124 02:40:15.713466 2280 factory.go:223] Registration of the containerd container factory successfully Jan 24 02:40:15.748446 kubelet[2280]: I0124 02:40:15.748399 2280 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 02:40:15.748446 kubelet[2280]: I0124 02:40:15.748443 2280 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 02:40:15.748698 kubelet[2280]: I0124 02:40:15.748473 2280 state_mem.go:36] "Initialized new in-memory state store" Jan 24 02:40:15.749957 kubelet[2280]: I0124 02:40:15.749899 2280 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 24 02:40:15.751097 kubelet[2280]: I0124 02:40:15.751069 2280 policy_none.go:49] "None policy: Start" Jan 24 02:40:15.751211 kubelet[2280]: I0124 02:40:15.751119 2280 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 02:40:15.751211 kubelet[2280]: I0124 02:40:15.751146 2280 state_mem.go:35] "Initializing new in-memory state store" Jan 24 02:40:15.755104 kubelet[2280]: I0124 02:40:15.754526 2280 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 24 02:40:15.755104 kubelet[2280]: I0124 02:40:15.754610 2280 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 02:40:15.755104 kubelet[2280]: I0124 02:40:15.754653 2280 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 02:40:15.755104 kubelet[2280]: I0124 02:40:15.754677 2280 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 02:40:15.755104 kubelet[2280]: E0124 02:40:15.754760 2280 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 02:40:15.758090 kubelet[2280]: E0124 02:40:15.758055 2280 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.33.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.33.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 02:40:15.771884 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 02:40:15.787487 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 02:40:15.792157 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 24 02:40:15.802181 kubelet[2280]: E0124 02:40:15.802138 2280 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-aqhf7.gb1.brightbox.com\" not found" Jan 24 02:40:15.804726 kubelet[2280]: E0124 02:40:15.804689 2280 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 02:40:15.805207 kubelet[2280]: I0124 02:40:15.804988 2280 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 02:40:15.805207 kubelet[2280]: I0124 02:40:15.805024 2280 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 02:40:15.807119 kubelet[2280]: I0124 02:40:15.805576 2280 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 02:40:15.808826 kubelet[2280]: E0124 02:40:15.808775 2280 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 02:40:15.809093 kubelet[2280]: E0124 02:40:15.809072 2280 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-aqhf7.gb1.brightbox.com\" not found" Jan 24 02:40:15.875284 systemd[1]: Created slice kubepods-burstable-pod3c423ee011e41bbdcdcbfdb3e5e89f1f.slice - libcontainer container kubepods-burstable-pod3c423ee011e41bbdcdcbfdb3e5e89f1f.slice. Jan 24 02:40:15.896104 kubelet[2280]: E0124 02:40:15.895291 2280 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aqhf7.gb1.brightbox.com\" not found" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:15.901227 systemd[1]: Created slice kubepods-burstable-pod1b775354d61e6492f38cd3ed21fdfef2.slice - libcontainer container kubepods-burstable-pod1b775354d61e6492f38cd3ed21fdfef2.slice. 
Jan 24 02:40:15.907308 kubelet[2280]: I0124 02:40:15.907266 2280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be85fa86c71e141aa98d8e0341e37644-kubeconfig\") pod \"kube-scheduler-srv-aqhf7.gb1.brightbox.com\" (UID: \"be85fa86c71e141aa98d8e0341e37644\") " pod="kube-system/kube-scheduler-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:15.907573 kubelet[2280]: I0124 02:40:15.907543 2280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c423ee011e41bbdcdcbfdb3e5e89f1f-ca-certs\") pod \"kube-apiserver-srv-aqhf7.gb1.brightbox.com\" (UID: \"3c423ee011e41bbdcdcbfdb3e5e89f1f\") " pod="kube-system/kube-apiserver-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:15.907703 kubelet[2280]: I0124 02:40:15.907679 2280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c423ee011e41bbdcdcbfdb3e5e89f1f-k8s-certs\") pod \"kube-apiserver-srv-aqhf7.gb1.brightbox.com\" (UID: \"3c423ee011e41bbdcdcbfdb3e5e89f1f\") " pod="kube-system/kube-apiserver-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:15.907823 kubelet[2280]: I0124 02:40:15.907799 2280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b775354d61e6492f38cd3ed21fdfef2-ca-certs\") pod \"kube-controller-manager-srv-aqhf7.gb1.brightbox.com\" (UID: \"1b775354d61e6492f38cd3ed21fdfef2\") " pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:15.908263 kubelet[2280]: I0124 02:40:15.907939 2280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b775354d61e6492f38cd3ed21fdfef2-flexvolume-dir\") pod 
\"kube-controller-manager-srv-aqhf7.gb1.brightbox.com\" (UID: \"1b775354d61e6492f38cd3ed21fdfef2\") " pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:15.908263 kubelet[2280]: I0124 02:40:15.907981 2280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c423ee011e41bbdcdcbfdb3e5e89f1f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-aqhf7.gb1.brightbox.com\" (UID: \"3c423ee011e41bbdcdcbfdb3e5e89f1f\") " pod="kube-system/kube-apiserver-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:15.908263 kubelet[2280]: I0124 02:40:15.908011 2280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b775354d61e6492f38cd3ed21fdfef2-k8s-certs\") pod \"kube-controller-manager-srv-aqhf7.gb1.brightbox.com\" (UID: \"1b775354d61e6492f38cd3ed21fdfef2\") " pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:15.908263 kubelet[2280]: I0124 02:40:15.908038 2280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b775354d61e6492f38cd3ed21fdfef2-kubeconfig\") pod \"kube-controller-manager-srv-aqhf7.gb1.brightbox.com\" (UID: \"1b775354d61e6492f38cd3ed21fdfef2\") " pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:15.909652 kubelet[2280]: I0124 02:40:15.908067 2280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b775354d61e6492f38cd3ed21fdfef2-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-aqhf7.gb1.brightbox.com\" (UID: \"1b775354d61e6492f38cd3ed21fdfef2\") " pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:15.909652 kubelet[2280]: 
I0124 02:40:15.908182 2280 kubelet_node_status.go:75] "Attempting to register node" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:15.910260 kubelet[2280]: E0124 02:40:15.910225 2280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.33.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-aqhf7.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.33.130:6443: connect: connection refused" interval="400ms" Jan 24 02:40:15.911271 kubelet[2280]: E0124 02:40:15.910918 2280 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aqhf7.gb1.brightbox.com\" not found" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:15.911271 kubelet[2280]: E0124 02:40:15.911116 2280 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.33.130:6443/api/v1/nodes\": dial tcp 10.230.33.130:6443: connect: connection refused" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:15.916912 systemd[1]: Created slice kubepods-burstable-podbe85fa86c71e141aa98d8e0341e37644.slice - libcontainer container kubepods-burstable-podbe85fa86c71e141aa98d8e0341e37644.slice. 
Jan 24 02:40:15.919635 kubelet[2280]: E0124 02:40:15.919605 2280 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aqhf7.gb1.brightbox.com\" not found" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:16.114739 kubelet[2280]: I0124 02:40:16.114691 2280 kubelet_node_status.go:75] "Attempting to register node" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:16.115496 kubelet[2280]: E0124 02:40:16.115444 2280 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.33.130:6443/api/v1/nodes\": dial tcp 10.230.33.130:6443: connect: connection refused" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:16.197803 containerd[1514]: time="2026-01-24T02:40:16.197599025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-aqhf7.gb1.brightbox.com,Uid:3c423ee011e41bbdcdcbfdb3e5e89f1f,Namespace:kube-system,Attempt:0,}" Jan 24 02:40:16.218225 containerd[1514]: time="2026-01-24T02:40:16.218149581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-aqhf7.gb1.brightbox.com,Uid:1b775354d61e6492f38cd3ed21fdfef2,Namespace:kube-system,Attempt:0,}" Jan 24 02:40:16.221623 containerd[1514]: time="2026-01-24T02:40:16.221560886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-aqhf7.gb1.brightbox.com,Uid:be85fa86c71e141aa98d8e0341e37644,Namespace:kube-system,Attempt:0,}" Jan 24 02:40:16.313496 kubelet[2280]: E0124 02:40:16.313395 2280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.33.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-aqhf7.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.33.130:6443: connect: connection refused" interval="800ms" Jan 24 02:40:16.519694 kubelet[2280]: I0124 02:40:16.519438 2280 kubelet_node_status.go:75] "Attempting to register node" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:16.520470 kubelet[2280]: 
E0124 02:40:16.520208 2280 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.33.130:6443/api/v1/nodes\": dial tcp 10.230.33.130:6443: connect: connection refused" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:16.642459 kubelet[2280]: E0124 02:40:16.642376 2280 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.33.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.33.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 02:40:16.785106 kubelet[2280]: E0124 02:40:16.784820 2280 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.33.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-aqhf7.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.33.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 02:40:16.815194 kubelet[2280]: E0124 02:40:16.814385 2280 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.33.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.33.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 02:40:16.824644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073351278.mount: Deactivated successfully. 
Jan 24 02:40:16.842211 containerd[1514]: time="2026-01-24T02:40:16.842122034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 02:40:16.843620 containerd[1514]: time="2026-01-24T02:40:16.843552661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 02:40:16.843708 containerd[1514]: time="2026-01-24T02:40:16.843667655Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 02:40:16.844910 containerd[1514]: time="2026-01-24T02:40:16.844781698Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 02:40:16.845105 containerd[1514]: time="2026-01-24T02:40:16.845044469Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 24 02:40:16.846610 containerd[1514]: time="2026-01-24T02:40:16.846169791Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 02:40:16.846610 containerd[1514]: time="2026-01-24T02:40:16.846559422Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 02:40:16.850570 containerd[1514]: time="2026-01-24T02:40:16.850497955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 02:40:16.853296 
containerd[1514]: time="2026-01-24T02:40:16.853252450Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.728845ms" Jan 24 02:40:16.858212 containerd[1514]: time="2026-01-24T02:40:16.858174173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 660.362421ms" Jan 24 02:40:16.864585 containerd[1514]: time="2026-01-24T02:40:16.864508071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 642.813053ms" Jan 24 02:40:17.080936 kubelet[2280]: E0124 02:40:17.080741 2280 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.33.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.33.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 02:40:17.088849 containerd[1514]: time="2026-01-24T02:40:17.088170879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:40:17.088849 containerd[1514]: time="2026-01-24T02:40:17.088713788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:40:17.088849 containerd[1514]: time="2026-01-24T02:40:17.088775918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:40:17.091307 containerd[1514]: time="2026-01-24T02:40:17.090860172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:40:17.091307 containerd[1514]: time="2026-01-24T02:40:17.090955136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:40:17.091307 containerd[1514]: time="2026-01-24T02:40:17.091024502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:40:17.091307 containerd[1514]: time="2026-01-24T02:40:17.091206534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:40:17.093757 containerd[1514]: time="2026-01-24T02:40:17.093138770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:40:17.093757 containerd[1514]: time="2026-01-24T02:40:17.093240449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:40:17.093757 containerd[1514]: time="2026-01-24T02:40:17.093265782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:40:17.093757 containerd[1514]: time="2026-01-24T02:40:17.093399439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:40:17.098652 containerd[1514]: time="2026-01-24T02:40:17.097788280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:40:17.114846 kubelet[2280]: E0124 02:40:17.114752 2280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.33.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-aqhf7.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.33.130:6443: connect: connection refused" interval="1.6s" Jan 24 02:40:17.145552 systemd[1]: Started cri-containerd-9fea7e10a2b33fea1d2e8a4a1486977c5a8240d1474d76dd89f2f7e3c97d8a98.scope - libcontainer container 9fea7e10a2b33fea1d2e8a4a1486977c5a8240d1474d76dd89f2f7e3c97d8a98. Jan 24 02:40:17.152238 systemd[1]: Started cri-containerd-0d3cdfe55e4e2e7d1a36576893bf508b0834b4bfea844d5a3a4a324acd77bf2f.scope - libcontainer container 0d3cdfe55e4e2e7d1a36576893bf508b0834b4bfea844d5a3a4a324acd77bf2f. Jan 24 02:40:17.166537 systemd[1]: Started cri-containerd-96fe0579d3d2d9fbf1a580bb184ba4c4b84557c7c6f9bcadc44e66c4a27526ee.scope - libcontainer container 96fe0579d3d2d9fbf1a580bb184ba4c4b84557c7c6f9bcadc44e66c4a27526ee. 
Jan 24 02:40:17.269516 containerd[1514]: time="2026-01-24T02:40:17.269440564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-aqhf7.gb1.brightbox.com,Uid:1b775354d61e6492f38cd3ed21fdfef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fea7e10a2b33fea1d2e8a4a1486977c5a8240d1474d76dd89f2f7e3c97d8a98\"" Jan 24 02:40:17.289735 containerd[1514]: time="2026-01-24T02:40:17.289665237Z" level=info msg="CreateContainer within sandbox \"9fea7e10a2b33fea1d2e8a4a1486977c5a8240d1474d76dd89f2f7e3c97d8a98\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 02:40:17.309305 containerd[1514]: time="2026-01-24T02:40:17.309264222Z" level=info msg="CreateContainer within sandbox \"9fea7e10a2b33fea1d2e8a4a1486977c5a8240d1474d76dd89f2f7e3c97d8a98\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"10b562c547d0bc5a8ff3e12e490ca3b5fb55cf546a2c6c9efdf92438a0199a59\"" Jan 24 02:40:17.310581 containerd[1514]: time="2026-01-24T02:40:17.310530072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-aqhf7.gb1.brightbox.com,Uid:3c423ee011e41bbdcdcbfdb3e5e89f1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"96fe0579d3d2d9fbf1a580bb184ba4c4b84557c7c6f9bcadc44e66c4a27526ee\"" Jan 24 02:40:17.311126 containerd[1514]: time="2026-01-24T02:40:17.311093871Z" level=info msg="StartContainer for \"10b562c547d0bc5a8ff3e12e490ca3b5fb55cf546a2c6c9efdf92438a0199a59\"" Jan 24 02:40:17.319526 containerd[1514]: time="2026-01-24T02:40:17.319429868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-aqhf7.gb1.brightbox.com,Uid:be85fa86c71e141aa98d8e0341e37644,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d3cdfe55e4e2e7d1a36576893bf508b0834b4bfea844d5a3a4a324acd77bf2f\"" Jan 24 02:40:17.323566 containerd[1514]: time="2026-01-24T02:40:17.323526539Z" level=info msg="CreateContainer within sandbox 
\"96fe0579d3d2d9fbf1a580bb184ba4c4b84557c7c6f9bcadc44e66c4a27526ee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 02:40:17.327948 containerd[1514]: time="2026-01-24T02:40:17.327902282Z" level=info msg="CreateContainer within sandbox \"0d3cdfe55e4e2e7d1a36576893bf508b0834b4bfea844d5a3a4a324acd77bf2f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 02:40:17.328293 kubelet[2280]: I0124 02:40:17.328251 2280 kubelet_node_status.go:75] "Attempting to register node" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:17.329043 kubelet[2280]: E0124 02:40:17.328998 2280 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.33.130:6443/api/v1/nodes\": dial tcp 10.230.33.130:6443: connect: connection refused" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:17.336895 kubelet[2280]: E0124 02:40:17.335643 2280 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.33.130:6443/api/v1/namespaces/default/events\": dial tcp 10.230.33.130:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-aqhf7.gb1.brightbox.com.188d8a69cf5b483c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-aqhf7.gb1.brightbox.com,UID:srv-aqhf7.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-aqhf7.gb1.brightbox.com,},FirstTimestamp:2026-01-24 02:40:15.681538108 +0000 UTC m=+1.300222843,LastTimestamp:2026-01-24 02:40:15.681538108 +0000 UTC m=+1.300222843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-aqhf7.gb1.brightbox.com,}" Jan 24 02:40:17.365576 systemd[1]: Started cri-containerd-10b562c547d0bc5a8ff3e12e490ca3b5fb55cf546a2c6c9efdf92438a0199a59.scope - libcontainer container 
10b562c547d0bc5a8ff3e12e490ca3b5fb55cf546a2c6c9efdf92438a0199a59. Jan 24 02:40:17.386064 containerd[1514]: time="2026-01-24T02:40:17.385723523Z" level=info msg="CreateContainer within sandbox \"96fe0579d3d2d9fbf1a580bb184ba4c4b84557c7c6f9bcadc44e66c4a27526ee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bb614bb0f39e94332a07aefa2d826bb40b416360aa72eeff8d7e5a73d888bf66\"" Jan 24 02:40:17.387101 containerd[1514]: time="2026-01-24T02:40:17.387040447Z" level=info msg="CreateContainer within sandbox \"0d3cdfe55e4e2e7d1a36576893bf508b0834b4bfea844d5a3a4a324acd77bf2f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"08dda16c34869408d6f3a66cae6326b74b2b87e9bc46b3d0609a204450dcd144\"" Jan 24 02:40:17.387804 containerd[1514]: time="2026-01-24T02:40:17.387760045Z" level=info msg="StartContainer for \"08dda16c34869408d6f3a66cae6326b74b2b87e9bc46b3d0609a204450dcd144\"" Jan 24 02:40:17.390921 containerd[1514]: time="2026-01-24T02:40:17.389980194Z" level=info msg="StartContainer for \"bb614bb0f39e94332a07aefa2d826bb40b416360aa72eeff8d7e5a73d888bf66\"" Jan 24 02:40:17.448976 systemd[1]: Started cri-containerd-bb614bb0f39e94332a07aefa2d826bb40b416360aa72eeff8d7e5a73d888bf66.scope - libcontainer container bb614bb0f39e94332a07aefa2d826bb40b416360aa72eeff8d7e5a73d888bf66. Jan 24 02:40:17.468161 systemd[1]: Started cri-containerd-08dda16c34869408d6f3a66cae6326b74b2b87e9bc46b3d0609a204450dcd144.scope - libcontainer container 08dda16c34869408d6f3a66cae6326b74b2b87e9bc46b3d0609a204450dcd144. 
Jan 24 02:40:17.476715 containerd[1514]: time="2026-01-24T02:40:17.476639302Z" level=info msg="StartContainer for \"10b562c547d0bc5a8ff3e12e490ca3b5fb55cf546a2c6c9efdf92438a0199a59\" returns successfully" Jan 24 02:40:17.572543 containerd[1514]: time="2026-01-24T02:40:17.572442999Z" level=info msg="StartContainer for \"08dda16c34869408d6f3a66cae6326b74b2b87e9bc46b3d0609a204450dcd144\" returns successfully" Jan 24 02:40:17.582011 containerd[1514]: time="2026-01-24T02:40:17.581629057Z" level=info msg="StartContainer for \"bb614bb0f39e94332a07aefa2d826bb40b416360aa72eeff8d7e5a73d888bf66\" returns successfully" Jan 24 02:40:17.672543 kubelet[2280]: E0124 02:40:17.670308 2280 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.33.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.33.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 02:40:17.783965 kubelet[2280]: E0124 02:40:17.783915 2280 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aqhf7.gb1.brightbox.com\" not found" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:17.785097 kubelet[2280]: E0124 02:40:17.784570 2280 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aqhf7.gb1.brightbox.com\" not found" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:17.787388 kubelet[2280]: E0124 02:40:17.786728 2280 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aqhf7.gb1.brightbox.com\" not found" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:18.791718 kubelet[2280]: E0124 02:40:18.791601 2280 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"srv-aqhf7.gb1.brightbox.com\" not found" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:18.794127 kubelet[2280]: E0124 02:40:18.793679 2280 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aqhf7.gb1.brightbox.com\" not found" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:18.933297 kubelet[2280]: I0124 02:40:18.933238 2280 kubelet_node_status.go:75] "Attempting to register node" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:20.686736 kubelet[2280]: I0124 02:40:20.686518 2280 kubelet_node_status.go:78] "Successfully registered node" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:20.686736 kubelet[2280]: E0124 02:40:20.686578 2280 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-aqhf7.gb1.brightbox.com\": node \"srv-aqhf7.gb1.brightbox.com\" not found" Jan 24 02:40:20.702426 kubelet[2280]: I0124 02:40:20.702393 2280 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:20.768353 kubelet[2280]: E0124 02:40:20.766655 2280 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-aqhf7.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:20.768353 kubelet[2280]: I0124 02:40:20.766705 2280 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:20.773387 kubelet[2280]: E0124 02:40:20.773110 2280 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-aqhf7.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:20.773387 kubelet[2280]: I0124 02:40:20.773157 2280 kubelet.go:3309] "Creating a mirror pod for static 
pod" pod="kube-system/kube-scheduler-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:20.779608 kubelet[2280]: E0124 02:40:20.779539 2280 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-aqhf7.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:20.785078 kubelet[2280]: E0124 02:40:20.785029 2280 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 24 02:40:21.168496 kubelet[2280]: I0124 02:40:21.168445 2280 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:21.176246 kubelet[2280]: E0124 02:40:21.175930 2280 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-aqhf7.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:21.663876 kubelet[2280]: I0124 02:40:21.663441 2280 apiserver.go:52] "Watching apiserver" Jan 24 02:40:21.706385 kubelet[2280]: I0124 02:40:21.705999 2280 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 02:40:21.789195 kubelet[2280]: I0124 02:40:21.788036 2280 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:21.798644 kubelet[2280]: I0124 02:40:21.798289 2280 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 02:40:22.450091 systemd[1]: Reloading requested from client PID 2589 ('systemctl') (unit session-9.scope)... Jan 24 02:40:22.450117 systemd[1]: Reloading... Jan 24 02:40:22.581362 zram_generator::config[2628]: No configuration found. 
Jan 24 02:40:22.768960 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 02:40:22.903241 systemd[1]: Reloading finished in 452 ms. Jan 24 02:40:22.967903 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 02:40:22.982968 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 02:40:22.983422 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 02:40:22.983508 systemd[1]: kubelet.service: Consumed 1.571s CPU time, 131.0M memory peak, 0B memory swap peak. Jan 24 02:40:22.995774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 02:40:23.261730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 02:40:23.273868 (kubelet)[2692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 02:40:23.369266 kubelet[2692]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 02:40:23.369266 kubelet[2692]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 02:40:23.369266 kubelet[2692]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 02:40:23.369957 kubelet[2692]: I0124 02:40:23.369385 2692 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 02:40:23.381379 kubelet[2692]: I0124 02:40:23.381198 2692 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 02:40:23.381379 kubelet[2692]: I0124 02:40:23.381244 2692 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 02:40:23.381637 kubelet[2692]: I0124 02:40:23.381582 2692 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 02:40:23.383781 kubelet[2692]: I0124 02:40:23.383753 2692 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 24 02:40:23.396425 kubelet[2692]: I0124 02:40:23.395144 2692 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 02:40:23.410818 kubelet[2692]: E0124 02:40:23.410764 2692 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 02:40:23.410818 kubelet[2692]: I0124 02:40:23.410817 2692 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 02:40:23.417968 kubelet[2692]: I0124 02:40:23.417905 2692 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 02:40:23.418666 kubelet[2692]: I0124 02:40:23.418596 2692 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 02:40:23.418965 kubelet[2692]: I0124 02:40:23.418683 2692 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-aqhf7.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 02:40:23.419146 kubelet[2692]: I0124 02:40:23.418971 2692 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 
02:40:23.419146 kubelet[2692]: I0124 02:40:23.418986 2692 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 02:40:23.420759 kubelet[2692]: I0124 02:40:23.420352 2692 state_mem.go:36] "Initialized new in-memory state store" Jan 24 02:40:23.422389 kubelet[2692]: I0124 02:40:23.422360 2692 kubelet.go:480] "Attempting to sync node with API server" Jan 24 02:40:23.422496 kubelet[2692]: I0124 02:40:23.422401 2692 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 02:40:23.422496 kubelet[2692]: I0124 02:40:23.422446 2692 kubelet.go:386] "Adding apiserver pod source" Jan 24 02:40:23.422496 kubelet[2692]: I0124 02:40:23.422486 2692 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 02:40:23.432517 kubelet[2692]: I0124 02:40:23.429812 2692 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 02:40:23.434407 kubelet[2692]: I0124 02:40:23.432629 2692 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 02:40:23.448458 kubelet[2692]: I0124 02:40:23.447726 2692 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 02:40:23.448458 kubelet[2692]: I0124 02:40:23.447804 2692 server.go:1289] "Started kubelet" Jan 24 02:40:23.464582 kubelet[2692]: I0124 02:40:23.464507 2692 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 02:40:23.466147 kubelet[2692]: I0124 02:40:23.465137 2692 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 02:40:23.466565 kubelet[2692]: I0124 02:40:23.466336 2692 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 02:40:23.468391 kubelet[2692]: I0124 02:40:23.468057 2692 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 
02:40:23.474947 kubelet[2692]: I0124 02:40:23.474911 2692 server.go:317] "Adding debug handlers to kubelet server" Jan 24 02:40:23.478482 kubelet[2692]: I0124 02:40:23.476629 2692 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 02:40:23.478482 kubelet[2692]: I0124 02:40:23.477657 2692 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 02:40:23.478482 kubelet[2692]: I0124 02:40:23.477918 2692 reconciler.go:26] "Reconciler: start to sync state" Jan 24 02:40:23.480413 kubelet[2692]: I0124 02:40:23.475130 2692 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 02:40:23.490809 kubelet[2692]: I0124 02:40:23.490757 2692 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 02:40:23.508610 kubelet[2692]: E0124 02:40:23.506240 2692 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 02:40:23.508610 kubelet[2692]: I0124 02:40:23.508125 2692 factory.go:223] Registration of the containerd container factory successfully Jan 24 02:40:23.508610 kubelet[2692]: I0124 02:40:23.508145 2692 factory.go:223] Registration of the systemd container factory successfully Jan 24 02:40:23.544585 kubelet[2692]: I0124 02:40:23.544272 2692 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 24 02:40:23.556809 kubelet[2692]: I0124 02:40:23.556462 2692 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 24 02:40:23.556809 kubelet[2692]: I0124 02:40:23.556504 2692 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 02:40:23.556809 kubelet[2692]: I0124 02:40:23.556574 2692 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 02:40:23.556809 kubelet[2692]: I0124 02:40:23.556590 2692 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 02:40:23.556809 kubelet[2692]: E0124 02:40:23.556663 2692 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 02:40:23.612375 kubelet[2692]: I0124 02:40:23.611879 2692 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 02:40:23.612375 kubelet[2692]: I0124 02:40:23.611905 2692 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 02:40:23.612375 kubelet[2692]: I0124 02:40:23.611942 2692 state_mem.go:36] "Initialized new in-memory state store" Jan 24 02:40:23.612375 kubelet[2692]: I0124 02:40:23.612162 2692 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 02:40:23.612375 kubelet[2692]: I0124 02:40:23.612182 2692 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 02:40:23.612375 kubelet[2692]: I0124 02:40:23.612209 2692 policy_none.go:49] "None policy: Start" Jan 24 02:40:23.612375 kubelet[2692]: I0124 02:40:23.612239 2692 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 02:40:23.612375 kubelet[2692]: I0124 02:40:23.612261 2692 state_mem.go:35] "Initializing new in-memory state store" Jan 24 02:40:23.613707 kubelet[2692]: I0124 02:40:23.613134 2692 state_mem.go:75] "Updated machine memory state" Jan 24 02:40:23.623740 kubelet[2692]: E0124 02:40:23.623051 2692 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 02:40:23.623740 kubelet[2692]: I0124 
02:40:23.623298 2692 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 02:40:23.623740 kubelet[2692]: I0124 02:40:23.623315 2692 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 02:40:23.628509 kubelet[2692]: I0124 02:40:23.626498 2692 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 02:40:23.635386 kubelet[2692]: E0124 02:40:23.635292 2692 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 02:40:23.658712 kubelet[2692]: I0124 02:40:23.658676 2692 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.662309 kubelet[2692]: I0124 02:40:23.662274 2692 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.662807 kubelet[2692]: I0124 02:40:23.662030 2692 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.674635 kubelet[2692]: I0124 02:40:23.674603 2692 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 02:40:23.675155 kubelet[2692]: E0124 02:40:23.675009 2692 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-aqhf7.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.678699 kubelet[2692]: I0124 02:40:23.678584 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b775354d61e6492f38cd3ed21fdfef2-flexvolume-dir\") pod \"kube-controller-manager-srv-aqhf7.gb1.brightbox.com\" (UID: 
\"1b775354d61e6492f38cd3ed21fdfef2\") " pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.680149 kubelet[2692]: I0124 02:40:23.679946 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b775354d61e6492f38cd3ed21fdfef2-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-aqhf7.gb1.brightbox.com\" (UID: \"1b775354d61e6492f38cd3ed21fdfef2\") " pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.680431 kubelet[2692]: I0124 02:40:23.680282 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be85fa86c71e141aa98d8e0341e37644-kubeconfig\") pod \"kube-scheduler-srv-aqhf7.gb1.brightbox.com\" (UID: \"be85fa86c71e141aa98d8e0341e37644\") " pod="kube-system/kube-scheduler-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.680793 kubelet[2692]: I0124 02:40:23.680574 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b775354d61e6492f38cd3ed21fdfef2-ca-certs\") pod \"kube-controller-manager-srv-aqhf7.gb1.brightbox.com\" (UID: \"1b775354d61e6492f38cd3ed21fdfef2\") " pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.681053 kubelet[2692]: I0124 02:40:23.680999 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b775354d61e6492f38cd3ed21fdfef2-k8s-certs\") pod \"kube-controller-manager-srv-aqhf7.gb1.brightbox.com\" (UID: \"1b775354d61e6492f38cd3ed21fdfef2\") " pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.682503 kubelet[2692]: I0124 02:40:23.681189 2692 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b775354d61e6492f38cd3ed21fdfef2-kubeconfig\") pod \"kube-controller-manager-srv-aqhf7.gb1.brightbox.com\" (UID: \"1b775354d61e6492f38cd3ed21fdfef2\") " pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.682915 kubelet[2692]: I0124 02:40:23.682748 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c423ee011e41bbdcdcbfdb3e5e89f1f-ca-certs\") pod \"kube-apiserver-srv-aqhf7.gb1.brightbox.com\" (UID: \"3c423ee011e41bbdcdcbfdb3e5e89f1f\") " pod="kube-system/kube-apiserver-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.682915 kubelet[2692]: I0124 02:40:23.679810 2692 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 02:40:23.683640 kubelet[2692]: I0124 02:40:23.679773 2692 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 02:40:23.683807 kubelet[2692]: I0124 02:40:23.683756 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c423ee011e41bbdcdcbfdb3e5e89f1f-k8s-certs\") pod \"kube-apiserver-srv-aqhf7.gb1.brightbox.com\" (UID: \"3c423ee011e41bbdcdcbfdb3e5e89f1f\") " pod="kube-system/kube-apiserver-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.685398 kubelet[2692]: I0124 02:40:23.684449 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c423ee011e41bbdcdcbfdb3e5e89f1f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-aqhf7.gb1.brightbox.com\" (UID: 
\"3c423ee011e41bbdcdcbfdb3e5e89f1f\") " pod="kube-system/kube-apiserver-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.748387 kubelet[2692]: I0124 02:40:23.748314 2692 kubelet_node_status.go:75] "Attempting to register node" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.763377 kubelet[2692]: I0124 02:40:23.763348 2692 kubelet_node_status.go:124] "Node was previously registered" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:23.763647 kubelet[2692]: I0124 02:40:23.763618 2692 kubelet_node_status.go:78] "Successfully registered node" node="srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:24.426286 kubelet[2692]: I0124 02:40:24.425813 2692 apiserver.go:52] "Watching apiserver" Jan 24 02:40:24.478796 kubelet[2692]: I0124 02:40:24.478680 2692 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 02:40:24.588694 kubelet[2692]: I0124 02:40:24.588646 2692 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:24.589171 kubelet[2692]: I0124 02:40:24.589138 2692 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:24.598056 kubelet[2692]: I0124 02:40:24.597753 2692 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 02:40:24.598056 kubelet[2692]: E0124 02:40:24.597817 2692 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-aqhf7.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:24.599391 kubelet[2692]: I0124 02:40:24.599368 2692 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 02:40:24.599667 kubelet[2692]: E0124 
02:40:24.599608 2692 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-aqhf7.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-aqhf7.gb1.brightbox.com" Jan 24 02:40:24.639200 kubelet[2692]: I0124 02:40:24.639083 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-aqhf7.gb1.brightbox.com" podStartSLOduration=1.63904758 podStartE2EDuration="1.63904758s" podCreationTimestamp="2026-01-24 02:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 02:40:24.637812112 +0000 UTC m=+1.350282291" watchObservedRunningTime="2026-01-24 02:40:24.63904758 +0000 UTC m=+1.351517739" Jan 24 02:40:24.639630 kubelet[2692]: I0124 02:40:24.639219 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-aqhf7.gb1.brightbox.com" podStartSLOduration=3.639211719 podStartE2EDuration="3.639211719s" podCreationTimestamp="2026-01-24 02:40:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 02:40:24.625484314 +0000 UTC m=+1.337954485" watchObservedRunningTime="2026-01-24 02:40:24.639211719 +0000 UTC m=+1.351681887" Jan 24 02:40:28.417363 kubelet[2692]: I0124 02:40:28.417105 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-aqhf7.gb1.brightbox.com" podStartSLOduration=5.417071255 podStartE2EDuration="5.417071255s" podCreationTimestamp="2026-01-24 02:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 02:40:24.653334118 +0000 UTC m=+1.365804281" watchObservedRunningTime="2026-01-24 02:40:28.417071255 +0000 UTC m=+5.129541433" Jan 24 02:40:29.742808 kubelet[2692]: I0124 02:40:29.742688 
2692 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 02:40:29.744546 containerd[1514]: time="2026-01-24T02:40:29.744315617Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 02:40:29.745042 kubelet[2692]: I0124 02:40:29.744980 2692 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 02:40:30.761794 systemd[1]: Created slice kubepods-besteffort-podeb10175e_aeb3_4670_a674_3847793b4513.slice - libcontainer container kubepods-besteffort-podeb10175e_aeb3_4670_a674_3847793b4513.slice. Jan 24 02:40:30.838171 kubelet[2692]: I0124 02:40:30.838103 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb10175e-aeb3-4670-a674-3847793b4513-lib-modules\") pod \"kube-proxy-s6xhl\" (UID: \"eb10175e-aeb3-4670-a674-3847793b4513\") " pod="kube-system/kube-proxy-s6xhl" Jan 24 02:40:30.838171 kubelet[2692]: I0124 02:40:30.838168 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb10175e-aeb3-4670-a674-3847793b4513-xtables-lock\") pod \"kube-proxy-s6xhl\" (UID: \"eb10175e-aeb3-4670-a674-3847793b4513\") " pod="kube-system/kube-proxy-s6xhl" Jan 24 02:40:30.838936 kubelet[2692]: I0124 02:40:30.838235 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf2zp\" (UniqueName: \"kubernetes.io/projected/eb10175e-aeb3-4670-a674-3847793b4513-kube-api-access-zf2zp\") pod \"kube-proxy-s6xhl\" (UID: \"eb10175e-aeb3-4670-a674-3847793b4513\") " pod="kube-system/kube-proxy-s6xhl" Jan 24 02:40:30.838936 kubelet[2692]: I0124 02:40:30.838277 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/eb10175e-aeb3-4670-a674-3847793b4513-kube-proxy\") pod \"kube-proxy-s6xhl\" (UID: \"eb10175e-aeb3-4670-a674-3847793b4513\") " pod="kube-system/kube-proxy-s6xhl" Jan 24 02:40:30.943654 systemd[1]: Created slice kubepods-besteffort-pod5467a93d_e255_4fc0_a6ca_4030e2cfb87a.slice - libcontainer container kubepods-besteffort-pod5467a93d_e255_4fc0_a6ca_4030e2cfb87a.slice. Jan 24 02:40:31.039470 kubelet[2692]: I0124 02:40:31.039207 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf4dk\" (UniqueName: \"kubernetes.io/projected/5467a93d-e255-4fc0-a6ca-4030e2cfb87a-kube-api-access-jf4dk\") pod \"tigera-operator-7dcd859c48-9hwmb\" (UID: \"5467a93d-e255-4fc0-a6ca-4030e2cfb87a\") " pod="tigera-operator/tigera-operator-7dcd859c48-9hwmb" Jan 24 02:40:31.039470 kubelet[2692]: I0124 02:40:31.039342 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5467a93d-e255-4fc0-a6ca-4030e2cfb87a-var-lib-calico\") pod \"tigera-operator-7dcd859c48-9hwmb\" (UID: \"5467a93d-e255-4fc0-a6ca-4030e2cfb87a\") " pod="tigera-operator/tigera-operator-7dcd859c48-9hwmb" Jan 24 02:40:31.075531 containerd[1514]: time="2026-01-24T02:40:31.075440202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s6xhl,Uid:eb10175e-aeb3-4670-a674-3847793b4513,Namespace:kube-system,Attempt:0,}" Jan 24 02:40:31.115708 containerd[1514]: time="2026-01-24T02:40:31.115552815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:40:31.116654 containerd[1514]: time="2026-01-24T02:40:31.116588984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:40:31.116993 containerd[1514]: time="2026-01-24T02:40:31.116787826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:40:31.117207 containerd[1514]: time="2026-01-24T02:40:31.117114366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:40:31.145688 systemd[1]: run-containerd-runc-k8s.io-c466b65ec9815aba97d8da24236b17784e7a4396c9554ba84f95fbe54067cbe6-runc.vRmRSc.mount: Deactivated successfully. Jan 24 02:40:31.161191 systemd[1]: Started cri-containerd-c466b65ec9815aba97d8da24236b17784e7a4396c9554ba84f95fbe54067cbe6.scope - libcontainer container c466b65ec9815aba97d8da24236b17784e7a4396c9554ba84f95fbe54067cbe6. Jan 24 02:40:31.205412 containerd[1514]: time="2026-01-24T02:40:31.205261235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s6xhl,Uid:eb10175e-aeb3-4670-a674-3847793b4513,Namespace:kube-system,Attempt:0,} returns sandbox id \"c466b65ec9815aba97d8da24236b17784e7a4396c9554ba84f95fbe54067cbe6\"" Jan 24 02:40:31.214263 containerd[1514]: time="2026-01-24T02:40:31.214212269Z" level=info msg="CreateContainer within sandbox \"c466b65ec9815aba97d8da24236b17784e7a4396c9554ba84f95fbe54067cbe6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 02:40:31.248946 containerd[1514]: time="2026-01-24T02:40:31.248850542Z" level=info msg="CreateContainer within sandbox \"c466b65ec9815aba97d8da24236b17784e7a4396c9554ba84f95fbe54067cbe6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"af2d808c06634845e924a0c41506e9257b0bb89563ef4be3892c7ad8c31cfd4f\"" Jan 24 02:40:31.250975 containerd[1514]: time="2026-01-24T02:40:31.250365235Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9hwmb,Uid:5467a93d-e255-4fc0-a6ca-4030e2cfb87a,Namespace:tigera-operator,Attempt:0,}" Jan 24 02:40:31.250975 containerd[1514]: time="2026-01-24T02:40:31.250605152Z" level=info msg="StartContainer for \"af2d808c06634845e924a0c41506e9257b0bb89563ef4be3892c7ad8c31cfd4f\"" Jan 24 02:40:31.303552 systemd[1]: Started cri-containerd-af2d808c06634845e924a0c41506e9257b0bb89563ef4be3892c7ad8c31cfd4f.scope - libcontainer container af2d808c06634845e924a0c41506e9257b0bb89563ef4be3892c7ad8c31cfd4f. Jan 24 02:40:31.308990 containerd[1514]: time="2026-01-24T02:40:31.308741278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:40:31.308990 containerd[1514]: time="2026-01-24T02:40:31.308856713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:40:31.308990 containerd[1514]: time="2026-01-24T02:40:31.308881295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:40:31.309495 containerd[1514]: time="2026-01-24T02:40:31.309038618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:40:31.349588 systemd[1]: Started cri-containerd-4b3ca83767bb7f37d4be07d42f2e689fb72dffa21899f72c7c92fc46d8c9a294.scope - libcontainer container 4b3ca83767bb7f37d4be07d42f2e689fb72dffa21899f72c7c92fc46d8c9a294. 
Jan 24 02:40:31.393024 containerd[1514]: time="2026-01-24T02:40:31.392961480Z" level=info msg="StartContainer for \"af2d808c06634845e924a0c41506e9257b0bb89563ef4be3892c7ad8c31cfd4f\" returns successfully" Jan 24 02:40:31.449071 containerd[1514]: time="2026-01-24T02:40:31.449015492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9hwmb,Uid:5467a93d-e255-4fc0-a6ca-4030e2cfb87a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4b3ca83767bb7f37d4be07d42f2e689fb72dffa21899f72c7c92fc46d8c9a294\"" Jan 24 02:40:31.454350 containerd[1514]: time="2026-01-24T02:40:31.453765364Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 02:40:31.630463 kubelet[2692]: I0124 02:40:31.628803 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s6xhl" podStartSLOduration=1.627880521 podStartE2EDuration="1.627880521s" podCreationTimestamp="2026-01-24 02:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 02:40:31.626850053 +0000 UTC m=+8.339320235" watchObservedRunningTime="2026-01-24 02:40:31.627880521 +0000 UTC m=+8.340350688" Jan 24 02:40:33.294602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1208963172.mount: Deactivated successfully. 
Jan 24 02:40:34.342822 containerd[1514]: time="2026-01-24T02:40:34.342722565Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:40:34.345065 containerd[1514]: time="2026-01-24T02:40:34.344980565Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 24 02:40:34.346230 containerd[1514]: time="2026-01-24T02:40:34.346161102Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:40:34.349394 containerd[1514]: time="2026-01-24T02:40:34.349092617Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:40:34.350696 containerd[1514]: time="2026-01-24T02:40:34.350633302Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.896809542s"
Jan 24 02:40:34.350843 containerd[1514]: time="2026-01-24T02:40:34.350815056Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Jan 24 02:40:34.357440 containerd[1514]: time="2026-01-24T02:40:34.357038548Z" level=info msg="CreateContainer within sandbox \"4b3ca83767bb7f37d4be07d42f2e689fb72dffa21899f72c7c92fc46d8c9a294\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 24 02:40:34.374581 containerd[1514]: time="2026-01-24T02:40:34.374544374Z" level=info msg="CreateContainer within sandbox \"4b3ca83767bb7f37d4be07d42f2e689fb72dffa21899f72c7c92fc46d8c9a294\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"23552c388d5b05129da9ba4f07b6dc5911a20fa3e9a9286f7c52c9bc8f9022e9\""
Jan 24 02:40:34.376654 containerd[1514]: time="2026-01-24T02:40:34.376523285Z" level=info msg="StartContainer for \"23552c388d5b05129da9ba4f07b6dc5911a20fa3e9a9286f7c52c9bc8f9022e9\""
Jan 24 02:40:34.432537 systemd[1]: Started cri-containerd-23552c388d5b05129da9ba4f07b6dc5911a20fa3e9a9286f7c52c9bc8f9022e9.scope - libcontainer container 23552c388d5b05129da9ba4f07b6dc5911a20fa3e9a9286f7c52c9bc8f9022e9.
Jan 24 02:40:34.469987 containerd[1514]: time="2026-01-24T02:40:34.469947454Z" level=info msg="StartContainer for \"23552c388d5b05129da9ba4f07b6dc5911a20fa3e9a9286f7c52c9bc8f9022e9\" returns successfully"
Jan 24 02:40:34.638475 kubelet[2692]: I0124 02:40:34.638132 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-9hwmb" podStartSLOduration=1.737617918 podStartE2EDuration="4.638098113s" podCreationTimestamp="2026-01-24 02:40:30 +0000 UTC" firstStartedPulling="2026-01-24 02:40:31.451710566 +0000 UTC m=+8.164180724" lastFinishedPulling="2026-01-24 02:40:34.352190755 +0000 UTC m=+11.064660919" observedRunningTime="2026-01-24 02:40:34.637044271 +0000 UTC m=+11.349514441" watchObservedRunningTime="2026-01-24 02:40:34.638098113 +0000 UTC m=+11.350568290"
Jan 24 02:40:38.705808 systemd[1]: Started sshd@11-10.230.33.130:22-159.223.6.232:57208.service - OpenSSH per-connection server daemon (159.223.6.232:57208).
Jan 24 02:40:38.880182 sshd[3057]: Invalid user mysql from 159.223.6.232 port 57208
Jan 24 02:40:38.905824 sshd[3057]: Connection closed by invalid user mysql 159.223.6.232 port 57208 [preauth]
Jan 24 02:40:38.907904 systemd[1]: sshd@11-10.230.33.130:22-159.223.6.232:57208.service: Deactivated successfully.
Jan 24 02:40:42.212401 sudo[1750]: pam_unix(sudo:session): session closed for user root
Jan 24 02:40:42.311161 sshd[1737]: pam_unix(sshd:session): session closed for user core
Jan 24 02:40:42.319402 systemd[1]: sshd@7-10.230.33.130:22-20.161.92.111:58296.service: Deactivated successfully.
Jan 24 02:40:42.326944 systemd[1]: session-9.scope: Deactivated successfully.
Jan 24 02:40:42.327388 systemd[1]: session-9.scope: Consumed 8.930s CPU time, 143.8M memory peak, 0B memory swap peak.
Jan 24 02:40:42.330674 systemd-logind[1491]: Session 9 logged out. Waiting for processes to exit.
Jan 24 02:40:42.334758 systemd-logind[1491]: Removed session 9.
Jan 24 02:40:46.329728 systemd[1]: Started sshd@12-10.230.33.130:22-157.245.70.174:44378.service - OpenSSH per-connection server daemon (157.245.70.174:44378).
Jan 24 02:40:46.477029 sshd[3100]: Connection closed by authenticating user root 157.245.70.174 port 44378 [preauth]
Jan 24 02:40:46.474771 systemd[1]: sshd@12-10.230.33.130:22-157.245.70.174:44378.service: Deactivated successfully.
Jan 24 02:40:49.202072 systemd[1]: Created slice kubepods-besteffort-pod59662d8e_c640_4505_a42f_9d9d8813c961.slice - libcontainer container kubepods-besteffort-pod59662d8e_c640_4505_a42f_9d9d8813c961.slice.
Jan 24 02:40:49.264390 kubelet[2692]: I0124 02:40:49.264199 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbrhz\" (UniqueName: \"kubernetes.io/projected/59662d8e-c640-4505-a42f-9d9d8813c961-kube-api-access-qbrhz\") pod \"calico-typha-78bf7cf879-vxrxc\" (UID: \"59662d8e-c640-4505-a42f-9d9d8813c961\") " pod="calico-system/calico-typha-78bf7cf879-vxrxc"
Jan 24 02:40:49.265488 kubelet[2692]: I0124 02:40:49.264861 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59662d8e-c640-4505-a42f-9d9d8813c961-tigera-ca-bundle\") pod \"calico-typha-78bf7cf879-vxrxc\" (UID: \"59662d8e-c640-4505-a42f-9d9d8813c961\") " pod="calico-system/calico-typha-78bf7cf879-vxrxc"
Jan 24 02:40:49.265488 kubelet[2692]: I0124 02:40:49.264964 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/59662d8e-c640-4505-a42f-9d9d8813c961-typha-certs\") pod \"calico-typha-78bf7cf879-vxrxc\" (UID: \"59662d8e-c640-4505-a42f-9d9d8813c961\") " pod="calico-system/calico-typha-78bf7cf879-vxrxc"
Jan 24 02:40:49.323602 systemd[1]: Created slice kubepods-besteffort-podec08f916_36c4_44f2_bd69_8289c082a254.slice - libcontainer container kubepods-besteffort-podec08f916_36c4_44f2_bd69_8289c082a254.slice.
Jan 24 02:40:49.366031 kubelet[2692]: I0124 02:40:49.365946 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ec08f916-36c4-44f2-bd69-8289c082a254-cni-net-dir\") pod \"calico-node-m65gq\" (UID: \"ec08f916-36c4-44f2-bd69-8289c082a254\") " pod="calico-system/calico-node-m65gq"
Jan 24 02:40:49.366031 kubelet[2692]: I0124 02:40:49.366031 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ec08f916-36c4-44f2-bd69-8289c082a254-flexvol-driver-host\") pod \"calico-node-m65gq\" (UID: \"ec08f916-36c4-44f2-bd69-8289c082a254\") " pod="calico-system/calico-node-m65gq"
Jan 24 02:40:49.366397 kubelet[2692]: I0124 02:40:49.366064 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ec08f916-36c4-44f2-bd69-8289c082a254-policysync\") pod \"calico-node-m65gq\" (UID: \"ec08f916-36c4-44f2-bd69-8289c082a254\") " pod="calico-system/calico-node-m65gq"
Jan 24 02:40:49.366397 kubelet[2692]: I0124 02:40:49.366092 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ec08f916-36c4-44f2-bd69-8289c082a254-var-run-calico\") pod \"calico-node-m65gq\" (UID: \"ec08f916-36c4-44f2-bd69-8289c082a254\") " pod="calico-system/calico-node-m65gq"
Jan 24 02:40:49.366397 kubelet[2692]: I0124 02:40:49.366163 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ec08f916-36c4-44f2-bd69-8289c082a254-var-lib-calico\") pod \"calico-node-m65gq\" (UID: \"ec08f916-36c4-44f2-bd69-8289c082a254\") " pod="calico-system/calico-node-m65gq"
Jan 24 02:40:49.366397 kubelet[2692]: I0124 02:40:49.366220 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ec08f916-36c4-44f2-bd69-8289c082a254-cni-log-dir\") pod \"calico-node-m65gq\" (UID: \"ec08f916-36c4-44f2-bd69-8289c082a254\") " pod="calico-system/calico-node-m65gq"
Jan 24 02:40:49.366397 kubelet[2692]: I0124 02:40:49.366250 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r5b7\" (UniqueName: \"kubernetes.io/projected/ec08f916-36c4-44f2-bd69-8289c082a254-kube-api-access-2r5b7\") pod \"calico-node-m65gq\" (UID: \"ec08f916-36c4-44f2-bd69-8289c082a254\") " pod="calico-system/calico-node-m65gq"
Jan 24 02:40:49.366707 kubelet[2692]: I0124 02:40:49.366282 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ec08f916-36c4-44f2-bd69-8289c082a254-cni-bin-dir\") pod \"calico-node-m65gq\" (UID: \"ec08f916-36c4-44f2-bd69-8289c082a254\") " pod="calico-system/calico-node-m65gq"
Jan 24 02:40:49.366707 kubelet[2692]: I0124 02:40:49.366309 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec08f916-36c4-44f2-bd69-8289c082a254-lib-modules\") pod \"calico-node-m65gq\" (UID: \"ec08f916-36c4-44f2-bd69-8289c082a254\") " pod="calico-system/calico-node-m65gq"
Jan 24 02:40:49.366707 kubelet[2692]: I0124 02:40:49.366362 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec08f916-36c4-44f2-bd69-8289c082a254-tigera-ca-bundle\") pod \"calico-node-m65gq\" (UID: \"ec08f916-36c4-44f2-bd69-8289c082a254\") " pod="calico-system/calico-node-m65gq"
Jan 24 02:40:49.366707 kubelet[2692]: I0124 02:40:49.366394 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec08f916-36c4-44f2-bd69-8289c082a254-xtables-lock\") pod \"calico-node-m65gq\" (UID: \"ec08f916-36c4-44f2-bd69-8289c082a254\") " pod="calico-system/calico-node-m65gq"
Jan 24 02:40:49.366707 kubelet[2692]: I0124 02:40:49.366435 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ec08f916-36c4-44f2-bd69-8289c082a254-node-certs\") pod \"calico-node-m65gq\" (UID: \"ec08f916-36c4-44f2-bd69-8289c082a254\") " pod="calico-system/calico-node-m65gq"
Jan 24 02:40:49.439727 kubelet[2692]: E0124 02:40:49.439653 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652"
Jan 24 02:40:49.481470 kubelet[2692]: E0124 02:40:49.480725 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.481470 kubelet[2692]: W0124 02:40:49.480789 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.481470 kubelet[2692]: E0124 02:40:49.480868 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.512051 containerd[1514]: time="2026-01-24T02:40:49.511361950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78bf7cf879-vxrxc,Uid:59662d8e-c640-4505-a42f-9d9d8813c961,Namespace:calico-system,Attempt:0,}"
Jan 24 02:40:49.543714 kubelet[2692]: E0124 02:40:49.542944 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.543714 kubelet[2692]: W0124 02:40:49.542981 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.543714 kubelet[2692]: E0124 02:40:49.543018 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.544422 kubelet[2692]: E0124 02:40:49.544151 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.544422 kubelet[2692]: W0124 02:40:49.544167 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.544422 kubelet[2692]: E0124 02:40:49.544183 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.547448 kubelet[2692]: E0124 02:40:49.545665 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.547448 kubelet[2692]: W0124 02:40:49.545686 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.547448 kubelet[2692]: E0124 02:40:49.545704 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.547448 kubelet[2692]: E0124 02:40:49.547303 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.547448 kubelet[2692]: W0124 02:40:49.547334 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.547985 kubelet[2692]: E0124 02:40:49.547352 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.553250 kubelet[2692]: E0124 02:40:49.550312 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.553250 kubelet[2692]: W0124 02:40:49.552988 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.553250 kubelet[2692]: E0124 02:40:49.553031 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.554083 kubelet[2692]: E0124 02:40:49.553929 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.554083 kubelet[2692]: W0124 02:40:49.553948 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.554083 kubelet[2692]: E0124 02:40:49.553965 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.564420 kubelet[2692]: E0124 02:40:49.560733 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.564420 kubelet[2692]: W0124 02:40:49.560781 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.564420 kubelet[2692]: E0124 02:40:49.560806 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.566031 kubelet[2692]: E0124 02:40:49.566004 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.566031 kubelet[2692]: W0124 02:40:49.566028 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.566206 kubelet[2692]: E0124 02:40:49.566047 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.567513 kubelet[2692]: E0124 02:40:49.567476 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.567513 kubelet[2692]: W0124 02:40:49.567497 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.567681 kubelet[2692]: E0124 02:40:49.567514 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.568758 kubelet[2692]: E0124 02:40:49.568643 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.568758 kubelet[2692]: W0124 02:40:49.568665 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.568758 kubelet[2692]: E0124 02:40:49.568683 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.571053 kubelet[2692]: E0124 02:40:49.569530 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.571053 kubelet[2692]: W0124 02:40:49.569560 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.571053 kubelet[2692]: E0124 02:40:49.569585 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.572237 kubelet[2692]: E0124 02:40:49.572127 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.572237 kubelet[2692]: W0124 02:40:49.572150 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.572237 kubelet[2692]: E0124 02:40:49.572169 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.574919 kubelet[2692]: E0124 02:40:49.573402 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.574919 kubelet[2692]: W0124 02:40:49.573425 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.574919 kubelet[2692]: E0124 02:40:49.573442 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.574919 kubelet[2692]: E0124 02:40:49.573754 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.574919 kubelet[2692]: W0124 02:40:49.573768 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.574919 kubelet[2692]: E0124 02:40:49.573783 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.574919 kubelet[2692]: E0124 02:40:49.574063 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.574919 kubelet[2692]: W0124 02:40:49.574077 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.574919 kubelet[2692]: E0124 02:40:49.574093 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.575384 kubelet[2692]: E0124 02:40:49.575047 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.575384 kubelet[2692]: W0124 02:40:49.575071 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.575384 kubelet[2692]: E0124 02:40:49.575088 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.575664 kubelet[2692]: E0124 02:40:49.575431 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.575664 kubelet[2692]: W0124 02:40:49.575450 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.575664 kubelet[2692]: E0124 02:40:49.575466 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.577411 kubelet[2692]: E0124 02:40:49.576582 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.577411 kubelet[2692]: W0124 02:40:49.576681 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.577411 kubelet[2692]: E0124 02:40:49.576701 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.577411 kubelet[2692]: E0124 02:40:49.576939 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.577411 kubelet[2692]: W0124 02:40:49.576954 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.577411 kubelet[2692]: E0124 02:40:49.576969 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.578083 kubelet[2692]: E0124 02:40:49.578015 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.578083 kubelet[2692]: W0124 02:40:49.578036 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.578083 kubelet[2692]: E0124 02:40:49.578053 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.579828 kubelet[2692]: E0124 02:40:49.579620 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.579828 kubelet[2692]: W0124 02:40:49.579636 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.579828 kubelet[2692]: E0124 02:40:49.579652 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.580390 kubelet[2692]: E0124 02:40:49.580007 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.580390 kubelet[2692]: W0124 02:40:49.580021 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.580390 kubelet[2692]: E0124 02:40:49.580037 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.580390 kubelet[2692]: I0124 02:40:49.580072 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7pg7\" (UniqueName: \"kubernetes.io/projected/9f63ab66-558d-4f53-8717-746e17757652-kube-api-access-w7pg7\") pod \"csi-node-driver-8rrnz\" (UID: \"9f63ab66-558d-4f53-8717-746e17757652\") " pod="calico-system/csi-node-driver-8rrnz"
Jan 24 02:40:49.582598 kubelet[2692]: E0124 02:40:49.581525 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.582598 kubelet[2692]: W0124 02:40:49.581566 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.582598 kubelet[2692]: E0124 02:40:49.581585 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.582598 kubelet[2692]: I0124 02:40:49.581622 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9f63ab66-558d-4f53-8717-746e17757652-registration-dir\") pod \"csi-node-driver-8rrnz\" (UID: \"9f63ab66-558d-4f53-8717-746e17757652\") " pod="calico-system/csi-node-driver-8rrnz"
Jan 24 02:40:49.582598 kubelet[2692]: E0124 02:40:49.581875 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.582598 kubelet[2692]: W0124 02:40:49.581890 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.582598 kubelet[2692]: E0124 02:40:49.581905 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.582598 kubelet[2692]: I0124 02:40:49.581932 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9f63ab66-558d-4f53-8717-746e17757652-socket-dir\") pod \"csi-node-driver-8rrnz\" (UID: \"9f63ab66-558d-4f53-8717-746e17757652\") " pod="calico-system/csi-node-driver-8rrnz"
Jan 24 02:40:49.582598 kubelet[2692]: E0124 02:40:49.582257 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.583062 kubelet[2692]: W0124 02:40:49.582277 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.583062 kubelet[2692]: E0124 02:40:49.582295 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.587840 kubelet[2692]: E0124 02:40:49.584997 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.587840 kubelet[2692]: W0124 02:40:49.585017 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.587840 kubelet[2692]: E0124 02:40:49.585035 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.587840 kubelet[2692]: E0124 02:40:49.585890 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.587840 kubelet[2692]: W0124 02:40:49.585906 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.587840 kubelet[2692]: E0124 02:40:49.585934 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.587840 kubelet[2692]: I0124 02:40:49.586000 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9f63ab66-558d-4f53-8717-746e17757652-varrun\") pod \"csi-node-driver-8rrnz\" (UID: \"9f63ab66-558d-4f53-8717-746e17757652\") " pod="calico-system/csi-node-driver-8rrnz"
Jan 24 02:40:49.591842 kubelet[2692]: E0124 02:40:49.591676 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.591842 kubelet[2692]: W0124 02:40:49.591698 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.591842 kubelet[2692]: E0124 02:40:49.591717 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.592707 kubelet[2692]: E0124 02:40:49.592095 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.592707 kubelet[2692]: W0124 02:40:49.592126 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.592707 kubelet[2692]: E0124 02:40:49.592142 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.595763 kubelet[2692]: E0124 02:40:49.595404 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.595763 kubelet[2692]: W0124 02:40:49.595424 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.595763 kubelet[2692]: E0124 02:40:49.595453 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.595763 kubelet[2692]: I0124 02:40:49.595512 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f63ab66-558d-4f53-8717-746e17757652-kubelet-dir\") pod \"csi-node-driver-8rrnz\" (UID: \"9f63ab66-558d-4f53-8717-746e17757652\") " pod="calico-system/csi-node-driver-8rrnz"
Jan 24 02:40:49.596109 kubelet[2692]: E0124 02:40:49.596088 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.596358 kubelet[2692]: W0124 02:40:49.596217 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.596358 kubelet[2692]: E0124 02:40:49.596244 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.597219 kubelet[2692]: E0124 02:40:49.597199 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.597508 kubelet[2692]: W0124 02:40:49.597339 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.597508 kubelet[2692]: E0124 02:40:49.597366 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.598119 kubelet[2692]: E0124 02:40:49.597687 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.598119 kubelet[2692]: W0124 02:40:49.597711 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.598119 kubelet[2692]: E0124 02:40:49.597728 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 02:40:49.599851 kubelet[2692]: E0124 02:40:49.599629 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 02:40:49.599851 kubelet[2692]: W0124 02:40:49.599651 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 02:40:49.599851 kubelet[2692]: E0124 02:40:49.599673 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 02:40:49.601396 kubelet[2692]: E0124 02:40:49.601287 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.601396 kubelet[2692]: W0124 02:40:49.601311 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.601396 kubelet[2692]: E0124 02:40:49.601353 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.601730 kubelet[2692]: E0124 02:40:49.601685 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.601730 kubelet[2692]: W0124 02:40:49.601699 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.601730 kubelet[2692]: E0124 02:40:49.601715 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.628973 containerd[1514]: time="2026-01-24T02:40:49.625257465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:40:49.629313 containerd[1514]: time="2026-01-24T02:40:49.628908428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:40:49.629313 containerd[1514]: time="2026-01-24T02:40:49.628945672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:40:49.633404 containerd[1514]: time="2026-01-24T02:40:49.630913797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m65gq,Uid:ec08f916-36c4-44f2-bd69-8289c082a254,Namespace:calico-system,Attempt:0,}" Jan 24 02:40:49.633537 containerd[1514]: time="2026-01-24T02:40:49.629661318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:40:49.708046 kubelet[2692]: E0124 02:40:49.707655 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.708046 kubelet[2692]: W0124 02:40:49.707700 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.708046 kubelet[2692]: E0124 02:40:49.707758 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.708721 kubelet[2692]: E0124 02:40:49.708648 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.709608 kubelet[2692]: W0124 02:40:49.709433 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.709608 kubelet[2692]: E0124 02:40:49.709495 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 02:40:49.710001 kubelet[2692]: E0124 02:40:49.709968 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.710001 kubelet[2692]: W0124 02:40:49.709998 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.710143 kubelet[2692]: E0124 02:40:49.710017 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.711457 kubelet[2692]: E0124 02:40:49.711408 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.711457 kubelet[2692]: W0124 02:40:49.711430 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.711457 kubelet[2692]: E0124 02:40:49.711446 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 02:40:49.712311 kubelet[2692]: E0124 02:40:49.711822 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.712311 kubelet[2692]: W0124 02:40:49.711837 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.712311 kubelet[2692]: E0124 02:40:49.711854 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.716782 kubelet[2692]: E0124 02:40:49.714554 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.716782 kubelet[2692]: W0124 02:40:49.714587 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.716782 kubelet[2692]: E0124 02:40:49.714606 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 02:40:49.717140 kubelet[2692]: E0124 02:40:49.716891 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.717140 kubelet[2692]: W0124 02:40:49.716907 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.717140 kubelet[2692]: E0124 02:40:49.716937 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.718505 kubelet[2692]: E0124 02:40:49.718172 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.718505 kubelet[2692]: W0124 02:40:49.718193 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.718505 kubelet[2692]: E0124 02:40:49.718210 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 02:40:49.721558 kubelet[2692]: E0124 02:40:49.719804 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.721558 kubelet[2692]: W0124 02:40:49.719826 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.721558 kubelet[2692]: E0124 02:40:49.719843 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.721856 kubelet[2692]: E0124 02:40:49.721813 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.721856 kubelet[2692]: W0124 02:40:49.721835 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.721856 kubelet[2692]: E0124 02:40:49.721851 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 02:40:49.722302 kubelet[2692]: E0124 02:40:49.722261 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.722302 kubelet[2692]: W0124 02:40:49.722287 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.722677 kubelet[2692]: E0124 02:40:49.722304 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.726409 kubelet[2692]: E0124 02:40:49.725033 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.726409 kubelet[2692]: W0124 02:40:49.725056 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.726409 kubelet[2692]: E0124 02:40:49.725074 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.727152 containerd[1514]: time="2026-01-24T02:40:49.725467854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:40:49.727836 kubelet[2692]: E0124 02:40:49.727426 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.727836 kubelet[2692]: W0124 02:40:49.727448 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.727836 kubelet[2692]: E0124 02:40:49.727465 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.730962 containerd[1514]: time="2026-01-24T02:40:49.730376657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:40:49.730962 containerd[1514]: time="2026-01-24T02:40:49.730416163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:40:49.734411 kubelet[2692]: E0124 02:40:49.732441 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.734411 kubelet[2692]: W0124 02:40:49.732463 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.734411 kubelet[2692]: E0124 02:40:49.732481 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 02:40:49.734818 kubelet[2692]: E0124 02:40:49.734673 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.734818 kubelet[2692]: W0124 02:40:49.734689 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.734818 kubelet[2692]: E0124 02:40:49.734705 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.736549 kubelet[2692]: E0124 02:40:49.735260 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.736549 kubelet[2692]: W0124 02:40:49.735276 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.736549 kubelet[2692]: E0124 02:40:49.735292 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 02:40:49.738883 kubelet[2692]: E0124 02:40:49.736916 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.738883 kubelet[2692]: W0124 02:40:49.736936 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.738883 kubelet[2692]: E0124 02:40:49.736952 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.738883 kubelet[2692]: E0124 02:40:49.738741 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.738883 kubelet[2692]: W0124 02:40:49.738757 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.738883 kubelet[2692]: E0124 02:40:49.738773 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 02:40:49.741544 kubelet[2692]: E0124 02:40:49.740621 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.741544 kubelet[2692]: W0124 02:40:49.740642 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.741544 kubelet[2692]: E0124 02:40:49.740659 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.743643 kubelet[2692]: E0124 02:40:49.742742 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.743643 kubelet[2692]: W0124 02:40:49.742758 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.743643 kubelet[2692]: E0124 02:40:49.742775 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 02:40:49.747193 kubelet[2692]: E0124 02:40:49.744150 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.747193 kubelet[2692]: W0124 02:40:49.744182 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.747193 kubelet[2692]: E0124 02:40:49.744200 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.747461 kubelet[2692]: E0124 02:40:49.747240 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.747461 kubelet[2692]: W0124 02:40:49.747255 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.747461 kubelet[2692]: E0124 02:40:49.747283 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 02:40:49.752109 kubelet[2692]: E0124 02:40:49.751251 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.752109 kubelet[2692]: W0124 02:40:49.751274 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.752109 kubelet[2692]: E0124 02:40:49.751351 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.752109 kubelet[2692]: E0124 02:40:49.751902 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.752109 kubelet[2692]: W0124 02:40:49.751917 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.752109 kubelet[2692]: E0124 02:40:49.751933 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.752655 containerd[1514]: time="2026-01-24T02:40:49.749756575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:40:49.755388 kubelet[2692]: E0124 02:40:49.753366 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.755388 kubelet[2692]: W0124 02:40:49.753390 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.755388 kubelet[2692]: E0124 02:40:49.753408 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.773372 kubelet[2692]: E0124 02:40:49.773287 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 02:40:49.773372 kubelet[2692]: W0124 02:40:49.773342 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 02:40:49.773625 kubelet[2692]: E0124 02:40:49.773388 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 02:40:49.785580 systemd[1]: Started cri-containerd-e977c0e508bc4ed4efb3cbde89efd4394392c9782a833d380ce74bbe05cc9e0d.scope - libcontainer container e977c0e508bc4ed4efb3cbde89efd4394392c9782a833d380ce74bbe05cc9e0d. Jan 24 02:40:49.823864 systemd[1]: Started cri-containerd-c383e8067c6429bdfd70bcd28cbef3095571f76411a7ff74f9e8d78208d28268.scope - libcontainer container c383e8067c6429bdfd70bcd28cbef3095571f76411a7ff74f9e8d78208d28268. 
Jan 24 02:40:49.998611 containerd[1514]: time="2026-01-24T02:40:49.998060257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m65gq,Uid:ec08f916-36c4-44f2-bd69-8289c082a254,Namespace:calico-system,Attempt:0,} returns sandbox id \"c383e8067c6429bdfd70bcd28cbef3095571f76411a7ff74f9e8d78208d28268\"" Jan 24 02:40:50.004602 containerd[1514]: time="2026-01-24T02:40:50.003894660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 02:40:50.088708 containerd[1514]: time="2026-01-24T02:40:50.088623215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78bf7cf879-vxrxc,Uid:59662d8e-c640-4505-a42f-9d9d8813c961,Namespace:calico-system,Attempt:0,} returns sandbox id \"e977c0e508bc4ed4efb3cbde89efd4394392c9782a833d380ce74bbe05cc9e0d\"" Jan 24 02:40:50.558177 kubelet[2692]: E0124 02:40:50.557977 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652" Jan 24 02:40:51.680347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1419241088.mount: Deactivated successfully. 
Jan 24 02:40:51.837971 containerd[1514]: time="2026-01-24T02:40:51.836744621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:40:51.840358 containerd[1514]: time="2026-01-24T02:40:51.840272184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 24 02:40:51.841204 containerd[1514]: time="2026-01-24T02:40:51.841105843Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:40:51.846548 containerd[1514]: time="2026-01-24T02:40:51.846498145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.84236593s" Jan 24 02:40:51.846696 containerd[1514]: time="2026-01-24T02:40:51.846667074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 02:40:51.847391 containerd[1514]: time="2026-01-24T02:40:51.847310320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:40:51.850565 containerd[1514]: time="2026-01-24T02:40:51.850407128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 02:40:51.856388 containerd[1514]: time="2026-01-24T02:40:51.855562291Z" level=info msg="CreateContainer within sandbox 
\"c383e8067c6429bdfd70bcd28cbef3095571f76411a7ff74f9e8d78208d28268\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 02:40:51.934108 containerd[1514]: time="2026-01-24T02:40:51.933116955Z" level=info msg="CreateContainer within sandbox \"c383e8067c6429bdfd70bcd28cbef3095571f76411a7ff74f9e8d78208d28268\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6450e7ed3ddb494b29ff9c8826bde7a2c48bf6b1a88ec16e144281b0e0683f58\"" Jan 24 02:40:51.940138 containerd[1514]: time="2026-01-24T02:40:51.937362190Z" level=info msg="StartContainer for \"6450e7ed3ddb494b29ff9c8826bde7a2c48bf6b1a88ec16e144281b0e0683f58\"" Jan 24 02:40:52.005646 systemd[1]: Started cri-containerd-6450e7ed3ddb494b29ff9c8826bde7a2c48bf6b1a88ec16e144281b0e0683f58.scope - libcontainer container 6450e7ed3ddb494b29ff9c8826bde7a2c48bf6b1a88ec16e144281b0e0683f58. Jan 24 02:40:52.061723 containerd[1514]: time="2026-01-24T02:40:52.061654814Z" level=info msg="StartContainer for \"6450e7ed3ddb494b29ff9c8826bde7a2c48bf6b1a88ec16e144281b0e0683f58\" returns successfully" Jan 24 02:40:52.091039 systemd[1]: cri-containerd-6450e7ed3ddb494b29ff9c8826bde7a2c48bf6b1a88ec16e144281b0e0683f58.scope: Deactivated successfully. 
Jan 24 02:40:52.197556 containerd[1514]: time="2026-01-24T02:40:52.173735710Z" level=info msg="shim disconnected" id=6450e7ed3ddb494b29ff9c8826bde7a2c48bf6b1a88ec16e144281b0e0683f58 namespace=k8s.io Jan 24 02:40:52.197556 containerd[1514]: time="2026-01-24T02:40:52.197276500Z" level=warning msg="cleaning up after shim disconnected" id=6450e7ed3ddb494b29ff9c8826bde7a2c48bf6b1a88ec16e144281b0e0683f58 namespace=k8s.io Jan 24 02:40:52.197556 containerd[1514]: time="2026-01-24T02:40:52.197311862Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 02:40:52.226914 containerd[1514]: time="2026-01-24T02:40:52.225157664Z" level=warning msg="cleanup warnings time=\"2026-01-24T02:40:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 02:40:52.557708 kubelet[2692]: E0124 02:40:52.557595 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652" Jan 24 02:40:52.590023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6450e7ed3ddb494b29ff9c8826bde7a2c48bf6b1a88ec16e144281b0e0683f58-rootfs.mount: Deactivated successfully. 
Jan 24 02:40:54.559875 kubelet[2692]: E0124 02:40:54.559659 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652"
Jan 24 02:40:55.078650 containerd[1514]: time="2026-01-24T02:40:55.078559498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:40:55.080103 containerd[1514]: time="2026-01-24T02:40:55.080025707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890"
Jan 24 02:40:55.083081 containerd[1514]: time="2026-01-24T02:40:55.082538454Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:40:55.085611 containerd[1514]: time="2026-01-24T02:40:55.085570250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:40:55.089041 containerd[1514]: time="2026-01-24T02:40:55.088994283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.238237576s"
Jan 24 02:40:55.089210 containerd[1514]: time="2026-01-24T02:40:55.089180713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 24 02:40:55.091786 containerd[1514]: time="2026-01-24T02:40:55.091723100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 24 02:40:55.122437 containerd[1514]: time="2026-01-24T02:40:55.121821668Z" level=info msg="CreateContainer within sandbox \"e977c0e508bc4ed4efb3cbde89efd4394392c9782a833d380ce74bbe05cc9e0d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 24 02:40:55.161579 containerd[1514]: time="2026-01-24T02:40:55.161528948Z" level=info msg="CreateContainer within sandbox \"e977c0e508bc4ed4efb3cbde89efd4394392c9782a833d380ce74bbe05cc9e0d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f06ac5f8808ddacf3eb807b7ca6a31025791a96fec1d606fc5c8262cd99c7940\""
Jan 24 02:40:55.162959 containerd[1514]: time="2026-01-24T02:40:55.162459542Z" level=info msg="StartContainer for \"f06ac5f8808ddacf3eb807b7ca6a31025791a96fec1d606fc5c8262cd99c7940\""
Jan 24 02:40:55.251628 systemd[1]: Started cri-containerd-f06ac5f8808ddacf3eb807b7ca6a31025791a96fec1d606fc5c8262cd99c7940.scope - libcontainer container f06ac5f8808ddacf3eb807b7ca6a31025791a96fec1d606fc5c8262cd99c7940.
Jan 24 02:40:55.325891 containerd[1514]: time="2026-01-24T02:40:55.325688197Z" level=info msg="StartContainer for \"f06ac5f8808ddacf3eb807b7ca6a31025791a96fec1d606fc5c8262cd99c7940\" returns successfully"
Jan 24 02:40:56.557494 kubelet[2692]: E0124 02:40:56.557421 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652"
Jan 24 02:40:56.764347 kubelet[2692]: I0124 02:40:56.757735 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78bf7cf879-vxrxc" podStartSLOduration=2.757955114 podStartE2EDuration="7.757692828s" podCreationTimestamp="2026-01-24 02:40:49 +0000 UTC" firstStartedPulling="2026-01-24 02:40:50.091245722 +0000 UTC m=+26.803715879" lastFinishedPulling="2026-01-24 02:40:55.090983427 +0000 UTC m=+31.803453593" observedRunningTime="2026-01-24 02:40:55.767001508 +0000 UTC m=+32.479471677" watchObservedRunningTime="2026-01-24 02:40:56.757692828 +0000 UTC m=+33.470163000"
Jan 24 02:40:58.559314 kubelet[2692]: E0124 02:40:58.557837 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652"
Jan 24 02:41:00.092727 containerd[1514]: time="2026-01-24T02:41:00.092664062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:41:00.093984 containerd[1514]: time="2026-01-24T02:41:00.093940101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 24 02:41:00.095366 containerd[1514]: time="2026-01-24T02:41:00.094741576Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:41:00.097877 containerd[1514]: time="2026-01-24T02:41:00.097839543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:41:00.099396 containerd[1514]: time="2026-01-24T02:41:00.099359153Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.007585046s"
Jan 24 02:41:00.099499 containerd[1514]: time="2026-01-24T02:41:00.099400727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 24 02:41:00.106941 containerd[1514]: time="2026-01-24T02:41:00.106907652Z" level=info msg="CreateContainer within sandbox \"c383e8067c6429bdfd70bcd28cbef3095571f76411a7ff74f9e8d78208d28268\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 24 02:41:00.135622 containerd[1514]: time="2026-01-24T02:41:00.135580629Z" level=info msg="CreateContainer within sandbox \"c383e8067c6429bdfd70bcd28cbef3095571f76411a7ff74f9e8d78208d28268\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2ac1a4fd5d6b61721b4513265ab31d6fac76ec0dcf491e7746bfed79adc9cc53\""
Jan 24 02:41:00.136887 containerd[1514]: time="2026-01-24T02:41:00.136593449Z" level=info msg="StartContainer for \"2ac1a4fd5d6b61721b4513265ab31d6fac76ec0dcf491e7746bfed79adc9cc53\""
Jan 24 02:41:00.197547 systemd[1]: Started cri-containerd-2ac1a4fd5d6b61721b4513265ab31d6fac76ec0dcf491e7746bfed79adc9cc53.scope - libcontainer container 2ac1a4fd5d6b61721b4513265ab31d6fac76ec0dcf491e7746bfed79adc9cc53.
Jan 24 02:41:00.259147 containerd[1514]: time="2026-01-24T02:41:00.258817884Z" level=info msg="StartContainer for \"2ac1a4fd5d6b61721b4513265ab31d6fac76ec0dcf491e7746bfed79adc9cc53\" returns successfully"
Jan 24 02:41:00.558153 kubelet[2692]: E0124 02:41:00.557654 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652"
Jan 24 02:41:01.398613 systemd[1]: cri-containerd-2ac1a4fd5d6b61721b4513265ab31d6fac76ec0dcf491e7746bfed79adc9cc53.scope: Deactivated successfully.
Jan 24 02:41:01.461725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ac1a4fd5d6b61721b4513265ab31d6fac76ec0dcf491e7746bfed79adc9cc53-rootfs.mount: Deactivated successfully.
Jan 24 02:41:01.507968 kubelet[2692]: I0124 02:41:01.494143 2692 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 24 02:41:01.591007 containerd[1514]: time="2026-01-24T02:41:01.572617031Z" level=info msg="shim disconnected" id=2ac1a4fd5d6b61721b4513265ab31d6fac76ec0dcf491e7746bfed79adc9cc53 namespace=k8s.io
Jan 24 02:41:01.592674 containerd[1514]: time="2026-01-24T02:41:01.591005196Z" level=warning msg="cleaning up after shim disconnected" id=2ac1a4fd5d6b61721b4513265ab31d6fac76ec0dcf491e7746bfed79adc9cc53 namespace=k8s.io
Jan 24 02:41:01.592674 containerd[1514]: time="2026-01-24T02:41:01.591036721Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 02:41:01.628449 containerd[1514]: time="2026-01-24T02:41:01.628382635Z" level=warning msg="cleanup warnings time=\"2026-01-24T02:41:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 24 02:41:01.672208 systemd[1]: Created slice kubepods-besteffort-pod28182695_8b28_4eab_884e_ccb6e32ecfc7.slice - libcontainer container kubepods-besteffort-pod28182695_8b28_4eab_884e_ccb6e32ecfc7.slice.
Jan 24 02:41:01.690260 systemd[1]: Created slice kubepods-burstable-poda844f830_48b6_4d22_81b9_0c77ec1069d3.slice - libcontainer container kubepods-burstable-poda844f830_48b6_4d22_81b9_0c77ec1069d3.slice.
Jan 24 02:41:01.705347 systemd[1]: Created slice kubepods-besteffort-pod9e38edac_3735_49c2_8f05_a82f9686ac99.slice - libcontainer container kubepods-besteffort-pod9e38edac_3735_49c2_8f05_a82f9686ac99.slice.
Jan 24 02:41:01.716358 kubelet[2692]: I0124 02:41:01.714165 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/57ed0d28-f7e6-4e62-8d12-5c54e0de4159-calico-apiserver-certs\") pod \"calico-apiserver-7bcbb787c9-46gqx\" (UID: \"57ed0d28-f7e6-4e62-8d12-5c54e0de4159\") " pod="calico-apiserver/calico-apiserver-7bcbb787c9-46gqx"
Jan 24 02:41:01.716358 kubelet[2692]: I0124 02:41:01.714220 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a844f830-48b6-4d22-81b9-0c77ec1069d3-config-volume\") pod \"coredns-674b8bbfcf-qrxcz\" (UID: \"a844f830-48b6-4d22-81b9-0c77ec1069d3\") " pod="kube-system/coredns-674b8bbfcf-qrxcz"
Jan 24 02:41:01.716358 kubelet[2692]: I0124 02:41:01.714252 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr4kp\" (UniqueName: \"kubernetes.io/projected/a844f830-48b6-4d22-81b9-0c77ec1069d3-kube-api-access-sr4kp\") pod \"coredns-674b8bbfcf-qrxcz\" (UID: \"a844f830-48b6-4d22-81b9-0c77ec1069d3\") " pod="kube-system/coredns-674b8bbfcf-qrxcz"
Jan 24 02:41:01.716358 kubelet[2692]: I0124 02:41:01.714283 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28182695-8b28-4eab-884e-ccb6e32ecfc7-whisker-ca-bundle\") pod \"whisker-7f8b76b7d6-25fqp\" (UID: \"28182695-8b28-4eab-884e-ccb6e32ecfc7\") " pod="calico-system/whisker-7f8b76b7d6-25fqp"
Jan 24 02:41:01.716358 kubelet[2692]: I0124 02:41:01.714323 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/310e75f7-dcbf-42b8-8e1b-0553e380b8f3-goldmane-ca-bundle\") pod \"goldmane-666569f655-rksn8\" (UID: \"310e75f7-dcbf-42b8-8e1b-0553e380b8f3\") " pod="calico-system/goldmane-666569f655-rksn8"
Jan 24 02:41:01.716029 systemd[1]: Created slice kubepods-besteffort-pod57ed0d28_f7e6_4e62_8d12_5c54e0de4159.slice - libcontainer container kubepods-besteffort-pod57ed0d28_f7e6_4e62_8d12_5c54e0de4159.slice.
Jan 24 02:41:01.718704 kubelet[2692]: I0124 02:41:01.714374 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kpww\" (UniqueName: \"kubernetes.io/projected/310e75f7-dcbf-42b8-8e1b-0553e380b8f3-kube-api-access-9kpww\") pod \"goldmane-666569f655-rksn8\" (UID: \"310e75f7-dcbf-42b8-8e1b-0553e380b8f3\") " pod="calico-system/goldmane-666569f655-rksn8"
Jan 24 02:41:01.718704 kubelet[2692]: I0124 02:41:01.714405 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nrp7\" (UniqueName: \"kubernetes.io/projected/28182695-8b28-4eab-884e-ccb6e32ecfc7-kube-api-access-9nrp7\") pod \"whisker-7f8b76b7d6-25fqp\" (UID: \"28182695-8b28-4eab-884e-ccb6e32ecfc7\") " pod="calico-system/whisker-7f8b76b7d6-25fqp"
Jan 24 02:41:01.718704 kubelet[2692]: I0124 02:41:01.714446 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9038de97-8842-48ce-8fdf-e9b5cfec0012-tigera-ca-bundle\") pod \"calico-kube-controllers-7d5c647d49-zhs4g\" (UID: \"9038de97-8842-48ce-8fdf-e9b5cfec0012\") " pod="calico-system/calico-kube-controllers-7d5c647d49-zhs4g"
Jan 24 02:41:01.718704 kubelet[2692]: I0124 02:41:01.714511 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt4fx\" (UniqueName: \"kubernetes.io/projected/21cb8328-f771-426d-aa02-0582dac338e9-kube-api-access-jt4fx\") pod \"coredns-674b8bbfcf-b4jj7\" (UID: \"21cb8328-f771-426d-aa02-0582dac338e9\") " pod="kube-system/coredns-674b8bbfcf-b4jj7"
Jan 24 02:41:01.718704 kubelet[2692]: I0124 02:41:01.714592 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/28182695-8b28-4eab-884e-ccb6e32ecfc7-whisker-backend-key-pair\") pod \"whisker-7f8b76b7d6-25fqp\" (UID: \"28182695-8b28-4eab-884e-ccb6e32ecfc7\") " pod="calico-system/whisker-7f8b76b7d6-25fqp"
Jan 24 02:41:01.719008 kubelet[2692]: I0124 02:41:01.714639 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/310e75f7-dcbf-42b8-8e1b-0553e380b8f3-config\") pod \"goldmane-666569f655-rksn8\" (UID: \"310e75f7-dcbf-42b8-8e1b-0553e380b8f3\") " pod="calico-system/goldmane-666569f655-rksn8"
Jan 24 02:41:01.719008 kubelet[2692]: I0124 02:41:01.714668 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9e38edac-3735-49c2-8f05-a82f9686ac99-calico-apiserver-certs\") pod \"calico-apiserver-7bcbb787c9-s24s2\" (UID: \"9e38edac-3735-49c2-8f05-a82f9686ac99\") " pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2"
Jan 24 02:41:01.719008 kubelet[2692]: I0124 02:41:01.714700 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/310e75f7-dcbf-42b8-8e1b-0553e380b8f3-goldmane-key-pair\") pod \"goldmane-666569f655-rksn8\" (UID: \"310e75f7-dcbf-42b8-8e1b-0553e380b8f3\") " pod="calico-system/goldmane-666569f655-rksn8"
Jan 24 02:41:01.719008 kubelet[2692]: I0124 02:41:01.714727 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwhhb\" (UniqueName: \"kubernetes.io/projected/9e38edac-3735-49c2-8f05-a82f9686ac99-kube-api-access-nwhhb\") pod \"calico-apiserver-7bcbb787c9-s24s2\" (UID: \"9e38edac-3735-49c2-8f05-a82f9686ac99\") " pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2"
Jan 24 02:41:01.719008 kubelet[2692]: I0124 02:41:01.714761 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtjhg\" (UniqueName: \"kubernetes.io/projected/57ed0d28-f7e6-4e62-8d12-5c54e0de4159-kube-api-access-qtjhg\") pod \"calico-apiserver-7bcbb787c9-46gqx\" (UID: \"57ed0d28-f7e6-4e62-8d12-5c54e0de4159\") " pod="calico-apiserver/calico-apiserver-7bcbb787c9-46gqx"
Jan 24 02:41:01.719257 kubelet[2692]: I0124 02:41:01.714822 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21cb8328-f771-426d-aa02-0582dac338e9-config-volume\") pod \"coredns-674b8bbfcf-b4jj7\" (UID: \"21cb8328-f771-426d-aa02-0582dac338e9\") " pod="kube-system/coredns-674b8bbfcf-b4jj7"
Jan 24 02:41:01.719257 kubelet[2692]: I0124 02:41:01.714855 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dbhr\" (UniqueName: \"kubernetes.io/projected/9038de97-8842-48ce-8fdf-e9b5cfec0012-kube-api-access-2dbhr\") pod \"calico-kube-controllers-7d5c647d49-zhs4g\" (UID: \"9038de97-8842-48ce-8fdf-e9b5cfec0012\") " pod="calico-system/calico-kube-controllers-7d5c647d49-zhs4g"
Jan 24 02:41:01.732752 systemd[1]: Created slice kubepods-burstable-pod21cb8328_f771_426d_aa02_0582dac338e9.slice - libcontainer container kubepods-burstable-pod21cb8328_f771_426d_aa02_0582dac338e9.slice.
Jan 24 02:41:01.746656 systemd[1]: Created slice kubepods-besteffort-pod310e75f7_dcbf_42b8_8e1b_0553e380b8f3.slice - libcontainer container kubepods-besteffort-pod310e75f7_dcbf_42b8_8e1b_0553e380b8f3.slice.
Jan 24 02:41:01.758725 systemd[1]: Created slice kubepods-besteffort-pod9038de97_8842_48ce_8fdf_e9b5cfec0012.slice - libcontainer container kubepods-besteffort-pod9038de97_8842_48ce_8fdf_e9b5cfec0012.slice.
Jan 24 02:41:01.766652 containerd[1514]: time="2026-01-24T02:41:01.766466517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 24 02:41:01.987786 containerd[1514]: time="2026-01-24T02:41:01.987626667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f8b76b7d6-25fqp,Uid:28182695-8b28-4eab-884e-ccb6e32ecfc7,Namespace:calico-system,Attempt:0,}"
Jan 24 02:41:01.996063 containerd[1514]: time="2026-01-24T02:41:01.996011961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qrxcz,Uid:a844f830-48b6-4d22-81b9-0c77ec1069d3,Namespace:kube-system,Attempt:0,}"
Jan 24 02:41:02.011810 containerd[1514]: time="2026-01-24T02:41:02.011480304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcbb787c9-s24s2,Uid:9e38edac-3735-49c2-8f05-a82f9686ac99,Namespace:calico-apiserver,Attempt:0,}"
Jan 24 02:41:02.028739 containerd[1514]: time="2026-01-24T02:41:02.028687066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcbb787c9-46gqx,Uid:57ed0d28-f7e6-4e62-8d12-5c54e0de4159,Namespace:calico-apiserver,Attempt:0,}"
Jan 24 02:41:02.043154 containerd[1514]: time="2026-01-24T02:41:02.042775493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b4jj7,Uid:21cb8328-f771-426d-aa02-0582dac338e9,Namespace:kube-system,Attempt:0,}"
Jan 24 02:41:02.087997 containerd[1514]: time="2026-01-24T02:41:02.087530746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5c647d49-zhs4g,Uid:9038de97-8842-48ce-8fdf-e9b5cfec0012,Namespace:calico-system,Attempt:0,}"
Jan 24 02:41:02.087997 containerd[1514]: time="2026-01-24T02:41:02.087935655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rksn8,Uid:310e75f7-dcbf-42b8-8e1b-0553e380b8f3,Namespace:calico-system,Attempt:0,}"
Jan 24 02:41:02.520449 containerd[1514]: time="2026-01-24T02:41:02.520375529Z" level=error msg="Failed to destroy network for sandbox \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.526610 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523-shm.mount: Deactivated successfully.
Jan 24 02:41:02.537494 containerd[1514]: time="2026-01-24T02:41:02.537401256Z" level=error msg="encountered an error cleaning up failed sandbox \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.541108 containerd[1514]: time="2026-01-24T02:41:02.541053330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcbb787c9-s24s2,Uid:9e38edac-3735-49c2-8f05-a82f9686ac99,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.541404 containerd[1514]: time="2026-01-24T02:41:02.538165442Z" level=error msg="Failed to destroy network for sandbox \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.545816 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef-shm.mount: Deactivated successfully.
Jan 24 02:41:02.547539 containerd[1514]: time="2026-01-24T02:41:02.546369387Z" level=error msg="encountered an error cleaning up failed sandbox \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.547539 containerd[1514]: time="2026-01-24T02:41:02.546441289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5c647d49-zhs4g,Uid:9038de97-8842-48ce-8fdf-e9b5cfec0012,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.558812 kubelet[2692]: E0124 02:41:02.557796 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.558812 kubelet[2692]: E0124 02:41:02.557946 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d5c647d49-zhs4g"
Jan 24 02:41:02.558812 kubelet[2692]: E0124 02:41:02.557995 2692 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d5c647d49-zhs4g"
Jan 24 02:41:02.559094 kubelet[2692]: E0124 02:41:02.558097 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d5c647d49-zhs4g_calico-system(9038de97-8842-48ce-8fdf-e9b5cfec0012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d5c647d49-zhs4g_calico-system(9038de97-8842-48ce-8fdf-e9b5cfec0012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d5c647d49-zhs4g" podUID="9038de97-8842-48ce-8fdf-e9b5cfec0012"
Jan 24 02:41:02.559094 kubelet[2692]: E0124 02:41:02.558615 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.559094 kubelet[2692]: E0124 02:41:02.558667 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2"
Jan 24 02:41:02.559309 kubelet[2692]: E0124 02:41:02.558697 2692 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2"
Jan 24 02:41:02.559309 kubelet[2692]: E0124 02:41:02.558746 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bcbb787c9-s24s2_calico-apiserver(9e38edac-3735-49c2-8f05-a82f9686ac99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bcbb787c9-s24s2_calico-apiserver(9e38edac-3735-49c2-8f05-a82f9686ac99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2" podUID="9e38edac-3735-49c2-8f05-a82f9686ac99"
Jan 24 02:41:02.573823 systemd[1]: Created slice kubepods-besteffort-pod9f63ab66_558d_4f53_8717_746e17757652.slice - libcontainer container kubepods-besteffort-pod9f63ab66_558d_4f53_8717_746e17757652.slice.
Jan 24 02:41:02.581848 containerd[1514]: time="2026-01-24T02:41:02.581806542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rrnz,Uid:9f63ab66-558d-4f53-8717-746e17757652,Namespace:calico-system,Attempt:0,}"
Jan 24 02:41:02.588780 containerd[1514]: time="2026-01-24T02:41:02.588726711Z" level=error msg="Failed to destroy network for sandbox \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.589217 containerd[1514]: time="2026-01-24T02:41:02.589175897Z" level=error msg="encountered an error cleaning up failed sandbox \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.589296 containerd[1514]: time="2026-01-24T02:41:02.589244212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b4jj7,Uid:21cb8328-f771-426d-aa02-0582dac338e9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.590495 kubelet[2692]: E0124 02:41:02.589586 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.590495 kubelet[2692]: E0124 02:41:02.589676 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b4jj7"
Jan 24 02:41:02.590495 kubelet[2692]: E0124 02:41:02.589711 2692 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b4jj7"
Jan 24 02:41:02.590706 kubelet[2692]: E0124 02:41:02.589779 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-b4jj7_kube-system(21cb8328-f771-426d-aa02-0582dac338e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-b4jj7_kube-system(21cb8328-f771-426d-aa02-0582dac338e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b4jj7" podUID="21cb8328-f771-426d-aa02-0582dac338e9"
Jan 24 02:41:02.594350 containerd[1514]: time="2026-01-24T02:41:02.592456570Z" level=error msg="Failed to destroy network for sandbox \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.594350 containerd[1514]: time="2026-01-24T02:41:02.592843177Z" level=error msg="encountered an error cleaning up failed sandbox \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.594350 containerd[1514]: time="2026-01-24T02:41:02.592893660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f8b76b7d6-25fqp,Uid:28182695-8b28-4eab-884e-ccb6e32ecfc7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.602406 containerd[1514]: time="2026-01-24T02:41:02.598385274Z" level=error msg="Failed to destroy network for sandbox \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.602406 containerd[1514]: time="2026-01-24T02:41:02.599128196Z" level=error msg="encountered an error cleaning up failed sandbox \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.602406 containerd[1514]: time="2026-01-24T02:41:02.599182479Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcbb787c9-46gqx,Uid:57ed0d28-f7e6-4e62-8d12-5c54e0de4159,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.602607 kubelet[2692]: E0124 02:41:02.597423 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.602607 kubelet[2692]: E0124 02:41:02.597485 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f8b76b7d6-25fqp"
Jan 24 02:41:02.602607 kubelet[2692]: E0124 02:41:02.597529 2692 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f8b76b7d6-25fqp"
Jan 24 02:41:02.595678 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257-shm.mount: Deactivated successfully.
Jan 24 02:41:02.602977 kubelet[2692]: E0124 02:41:02.597603 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f8b76b7d6-25fqp_calico-system(28182695-8b28-4eab-884e-ccb6e32ecfc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f8b76b7d6-25fqp_calico-system(28182695-8b28-4eab-884e-ccb6e32ecfc7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f8b76b7d6-25fqp" podUID="28182695-8b28-4eab-884e-ccb6e32ecfc7"
Jan 24 02:41:02.602977 kubelet[2692]: E0124 02:41:02.599394 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 02:41:02.602977 kubelet[2692]: E0124 02:41:02.599504 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bcbb787c9-46gqx"
Jan 24 02:41:02.603181 kubelet[2692]: E0124 02:41:02.599531 2692 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bcbb787c9-46gqx"
Jan 24 02:41:02.603181 kubelet[2692]: E0124 02:41:02.599575 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bcbb787c9-46gqx_calico-apiserver(57ed0d28-f7e6-4e62-8d12-5c54e0de4159)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bcbb787c9-46gqx_calico-apiserver(57ed0d28-f7e6-4e62-8d12-5c54e0de4159)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-46gqx" podUID="57ed0d28-f7e6-4e62-8d12-5c54e0de4159"
Jan 24 02:41:02.605625 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5-shm.mount: Deactivated successfully.
Jan 24 02:41:02.605812 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e-shm.mount: Deactivated successfully.
Jan 24 02:41:02.611677 containerd[1514]: time="2026-01-24T02:41:02.611602695Z" level=error msg="Failed to destroy network for sandbox \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:02.613031 containerd[1514]: time="2026-01-24T02:41:02.612985985Z" level=error msg="encountered an error cleaning up failed sandbox \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:02.613340 containerd[1514]: time="2026-01-24T02:41:02.613202049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qrxcz,Uid:a844f830-48b6-4d22-81b9-0c77ec1069d3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:02.613673 kubelet[2692]: E0124 02:41:02.613624 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:02.613765 kubelet[2692]: E0124 02:41:02.613695 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qrxcz" Jan 24 02:41:02.613765 kubelet[2692]: E0124 02:41:02.613728 2692 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qrxcz" Jan 24 02:41:02.613878 kubelet[2692]: E0124 02:41:02.613795 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qrxcz_kube-system(a844f830-48b6-4d22-81b9-0c77ec1069d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qrxcz_kube-system(a844f830-48b6-4d22-81b9-0c77ec1069d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qrxcz" podUID="a844f830-48b6-4d22-81b9-0c77ec1069d3" Jan 24 02:41:02.641678 containerd[1514]: time="2026-01-24T02:41:02.641161049Z" level=error msg="Failed to destroy network for sandbox \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 
02:41:02.641678 containerd[1514]: time="2026-01-24T02:41:02.641575963Z" level=error msg="encountered an error cleaning up failed sandbox \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:02.642354 containerd[1514]: time="2026-01-24T02:41:02.641643949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rksn8,Uid:310e75f7-dcbf-42b8-8e1b-0553e380b8f3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:02.642950 kubelet[2692]: E0124 02:41:02.642879 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:02.643141 kubelet[2692]: E0124 02:41:02.642998 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rksn8" Jan 24 02:41:02.643141 kubelet[2692]: E0124 02:41:02.643041 2692 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rksn8" Jan 24 02:41:02.643245 kubelet[2692]: E0124 02:41:02.643122 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-rksn8_calico-system(310e75f7-dcbf-42b8-8e1b-0553e380b8f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-rksn8_calico-system(310e75f7-dcbf-42b8-8e1b-0553e380b8f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-rksn8" podUID="310e75f7-dcbf-42b8-8e1b-0553e380b8f3" Jan 24 02:41:02.709036 containerd[1514]: time="2026-01-24T02:41:02.708973277Z" level=error msg="Failed to destroy network for sandbox \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:02.709874 containerd[1514]: time="2026-01-24T02:41:02.709681891Z" level=error msg="encountered an error cleaning up failed sandbox \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:02.709874 containerd[1514]: time="2026-01-24T02:41:02.709747213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rrnz,Uid:9f63ab66-558d-4f53-8717-746e17757652,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:02.710188 kubelet[2692]: E0124 02:41:02.710108 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:02.710277 kubelet[2692]: E0124 02:41:02.710211 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8rrnz" Jan 24 02:41:02.710277 kubelet[2692]: E0124 02:41:02.710264 2692 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-8rrnz" Jan 24 02:41:02.710463 kubelet[2692]: E0124 02:41:02.710374 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8rrnz_calico-system(9f63ab66-558d-4f53-8717-746e17757652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8rrnz_calico-system(9f63ab66-558d-4f53-8717-746e17757652)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652" Jan 24 02:41:02.769296 kubelet[2692]: I0124 02:41:02.769157 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Jan 24 02:41:02.772870 kubelet[2692]: I0124 02:41:02.772748 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Jan 24 02:41:02.779886 kubelet[2692]: I0124 02:41:02.779432 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Jan 24 02:41:02.783749 kubelet[2692]: I0124 02:41:02.783077 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:02.788434 containerd[1514]: time="2026-01-24T02:41:02.788294233Z" level=info msg="StopPodSandbox for \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\"" Jan 24 02:41:02.790614 containerd[1514]: time="2026-01-24T02:41:02.789148148Z" level=info msg="StopPodSandbox for 
\"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\"" Jan 24 02:41:02.792898 containerd[1514]: time="2026-01-24T02:41:02.791869055Z" level=info msg="Ensure that sandbox 3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9 in task-service has been cleanup successfully" Jan 24 02:41:02.793294 containerd[1514]: time="2026-01-24T02:41:02.793225109Z" level=info msg="StopPodSandbox for \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\"" Jan 24 02:41:02.793907 containerd[1514]: time="2026-01-24T02:41:02.793837018Z" level=info msg="Ensure that sandbox 82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257 in task-service has been cleanup successfully" Jan 24 02:41:02.795483 containerd[1514]: time="2026-01-24T02:41:02.795445883Z" level=info msg="StopPodSandbox for \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\"" Jan 24 02:41:02.795828 containerd[1514]: time="2026-01-24T02:41:02.795794318Z" level=info msg="Ensure that sandbox 6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e in task-service has been cleanup successfully" Jan 24 02:41:02.805603 containerd[1514]: time="2026-01-24T02:41:02.805470806Z" level=info msg="Ensure that sandbox 77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363 in task-service has been cleanup successfully" Jan 24 02:41:02.808763 kubelet[2692]: I0124 02:41:02.808730 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Jan 24 02:41:02.810278 kubelet[2692]: I0124 02:41:02.810251 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Jan 24 02:41:02.811852 containerd[1514]: time="2026-01-24T02:41:02.811770466Z" level=info msg="StopPodSandbox for \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\"" Jan 24 02:41:02.812631 
containerd[1514]: time="2026-01-24T02:41:02.812496634Z" level=info msg="StopPodSandbox for \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\"" Jan 24 02:41:02.814343 containerd[1514]: time="2026-01-24T02:41:02.814066940Z" level=info msg="Ensure that sandbox b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3 in task-service has been cleanup successfully" Jan 24 02:41:02.814921 containerd[1514]: time="2026-01-24T02:41:02.814834196Z" level=info msg="Ensure that sandbox 89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5 in task-service has been cleanup successfully" Jan 24 02:41:02.820886 kubelet[2692]: I0124 02:41:02.820847 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Jan 24 02:41:02.824069 containerd[1514]: time="2026-01-24T02:41:02.823376622Z" level=info msg="StopPodSandbox for \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\"" Jan 24 02:41:02.827349 containerd[1514]: time="2026-01-24T02:41:02.827138003Z" level=info msg="Ensure that sandbox 95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523 in task-service has been cleanup successfully" Jan 24 02:41:02.864909 kubelet[2692]: I0124 02:41:02.863962 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Jan 24 02:41:02.874807 containerd[1514]: time="2026-01-24T02:41:02.874754240Z" level=info msg="StopPodSandbox for \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\"" Jan 24 02:41:02.875239 containerd[1514]: time="2026-01-24T02:41:02.875203247Z" level=info msg="Ensure that sandbox a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef in task-service has been cleanup successfully" Jan 24 02:41:02.978438 containerd[1514]: time="2026-01-24T02:41:02.978369758Z" level=error msg="StopPodSandbox for 
\"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\" failed" error="failed to destroy network for sandbox \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:02.979344 kubelet[2692]: E0124 02:41:02.979161 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Jan 24 02:41:02.986967 kubelet[2692]: E0124 02:41:02.986509 2692 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5"} Jan 24 02:41:02.986967 kubelet[2692]: E0124 02:41:02.986681 2692 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57ed0d28-f7e6-4e62-8d12-5c54e0de4159\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 02:41:02.986967 kubelet[2692]: E0124 02:41:02.986743 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57ed0d28-f7e6-4e62-8d12-5c54e0de4159\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-46gqx" podUID="57ed0d28-f7e6-4e62-8d12-5c54e0de4159" Jan 24 02:41:03.000422 containerd[1514]: time="2026-01-24T02:41:03.000338720Z" level=error msg="StopPodSandbox for \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\" failed" error="failed to destroy network for sandbox \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:03.000969 kubelet[2692]: E0124 02:41:03.000651 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Jan 24 02:41:03.001173 kubelet[2692]: E0124 02:41:03.000740 2692 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523"} Jan 24 02:41:03.001644 kubelet[2692]: E0124 02:41:03.001235 2692 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e38edac-3735-49c2-8f05-a82f9686ac99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 02:41:03.001644 kubelet[2692]: E0124 02:41:03.001290 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e38edac-3735-49c2-8f05-a82f9686ac99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2" podUID="9e38edac-3735-49c2-8f05-a82f9686ac99" Jan 24 02:41:03.013121 containerd[1514]: time="2026-01-24T02:41:03.012485406Z" level=error msg="StopPodSandbox for \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\" failed" error="failed to destroy network for sandbox \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:03.013310 kubelet[2692]: E0124 02:41:03.012817 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Jan 24 02:41:03.013310 kubelet[2692]: E0124 02:41:03.012928 2692 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257"} Jan 24 02:41:03.013310 kubelet[2692]: E0124 02:41:03.012980 2692 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21cb8328-f771-426d-aa02-0582dac338e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 02:41:03.013310 kubelet[2692]: E0124 02:41:03.013014 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21cb8328-f771-426d-aa02-0582dac338e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b4jj7" podUID="21cb8328-f771-426d-aa02-0582dac338e9" Jan 24 02:41:03.021042 containerd[1514]: time="2026-01-24T02:41:03.020867701Z" level=error msg="StopPodSandbox for \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\" failed" error="failed to destroy network for sandbox \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:03.021705 kubelet[2692]: E0124 02:41:03.021478 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:03.021705 kubelet[2692]: E0124 02:41:03.021552 2692 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e"} Jan 24 02:41:03.021705 kubelet[2692]: E0124 02:41:03.021606 2692 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28182695-8b28-4eab-884e-ccb6e32ecfc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 02:41:03.021705 kubelet[2692]: E0124 02:41:03.021643 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28182695-8b28-4eab-884e-ccb6e32ecfc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f8b76b7d6-25fqp" podUID="28182695-8b28-4eab-884e-ccb6e32ecfc7" Jan 24 02:41:03.034232 containerd[1514]: time="2026-01-24T02:41:03.032937122Z" level=error msg="StopPodSandbox for \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\" failed" error="failed to destroy network for 
sandbox \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:03.034944 kubelet[2692]: E0124 02:41:03.033290 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Jan 24 02:41:03.034944 kubelet[2692]: E0124 02:41:03.034726 2692 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3"} Jan 24 02:41:03.034944 kubelet[2692]: E0124 02:41:03.034860 2692 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"310e75f7-dcbf-42b8-8e1b-0553e380b8f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 02:41:03.034944 kubelet[2692]: E0124 02:41:03.034895 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"310e75f7-dcbf-42b8-8e1b-0553e380b8f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-rksn8" podUID="310e75f7-dcbf-42b8-8e1b-0553e380b8f3" Jan 24 02:41:03.040132 containerd[1514]: time="2026-01-24T02:41:03.040076378Z" level=error msg="StopPodSandbox for \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\" failed" error="failed to destroy network for sandbox \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:03.040606 kubelet[2692]: E0124 02:41:03.040557 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Jan 24 02:41:03.040941 kubelet[2692]: E0124 02:41:03.040792 2692 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9"} Jan 24 02:41:03.040941 kubelet[2692]: E0124 02:41:03.040856 2692 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f63ab66-558d-4f53-8717-746e17757652\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Jan 24 02:41:03.040941 kubelet[2692]: E0124 02:41:03.040899 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f63ab66-558d-4f53-8717-746e17757652\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652" Jan 24 02:41:03.042589 containerd[1514]: time="2026-01-24T02:41:03.042535967Z" level=error msg="StopPodSandbox for \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\" failed" error="failed to destroy network for sandbox \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:03.042778 kubelet[2692]: E0124 02:41:03.042730 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Jan 24 02:41:03.042866 kubelet[2692]: E0124 02:41:03.042781 2692 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363"} Jan 24 02:41:03.042866 kubelet[2692]: E0124 02:41:03.042819 2692 
kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a844f830-48b6-4d22-81b9-0c77ec1069d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 02:41:03.042988 kubelet[2692]: E0124 02:41:03.042857 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a844f830-48b6-4d22-81b9-0c77ec1069d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qrxcz" podUID="a844f830-48b6-4d22-81b9-0c77ec1069d3" Jan 24 02:41:03.043146 containerd[1514]: time="2026-01-24T02:41:03.043081620Z" level=error msg="StopPodSandbox for \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\" failed" error="failed to destroy network for sandbox \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 02:41:03.043498 kubelet[2692]: E0124 02:41:03.043344 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Jan 24 02:41:03.043498 kubelet[2692]: E0124 02:41:03.043393 2692 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef"} Jan 24 02:41:03.043498 kubelet[2692]: E0124 02:41:03.043427 2692 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9038de97-8842-48ce-8fdf-e9b5cfec0012\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 02:41:03.043498 kubelet[2692]: E0124 02:41:03.043455 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9038de97-8842-48ce-8fdf-e9b5cfec0012\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d5c647d49-zhs4g" podUID="9038de97-8842-48ce-8fdf-e9b5cfec0012" Jan 24 02:41:03.464311 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3-shm.mount: Deactivated successfully. Jan 24 02:41:03.464474 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363-shm.mount: Deactivated successfully. 
Jan 24 02:41:12.571979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1327420915.mount: Deactivated successfully. Jan 24 02:41:12.695873 containerd[1514]: time="2026-01-24T02:41:12.667127033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 02:41:12.696693 containerd[1514]: time="2026-01-24T02:41:12.696419977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:41:12.710038 containerd[1514]: time="2026-01-24T02:41:12.709489680Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:41:12.710038 containerd[1514]: time="2026-01-24T02:41:12.709539931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.942867812s" Jan 24 02:41:12.710038 containerd[1514]: time="2026-01-24T02:41:12.709583292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 02:41:12.710999 containerd[1514]: time="2026-01-24T02:41:12.710582183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 02:41:12.765375 containerd[1514]: time="2026-01-24T02:41:12.765300114Z" level=info msg="CreateContainer within sandbox \"c383e8067c6429bdfd70bcd28cbef3095571f76411a7ff74f9e8d78208d28268\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 02:41:12.814413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921530595.mount: Deactivated successfully. Jan 24 02:41:12.826276 containerd[1514]: time="2026-01-24T02:41:12.825843769Z" level=info msg="CreateContainer within sandbox \"c383e8067c6429bdfd70bcd28cbef3095571f76411a7ff74f9e8d78208d28268\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"146cd365c25a3b91898e97f48721481f2b2544d660e922a4981d0559937d7373\"" Jan 24 02:41:12.834513 containerd[1514]: time="2026-01-24T02:41:12.834356467Z" level=info msg="StartContainer for \"146cd365c25a3b91898e97f48721481f2b2544d660e922a4981d0559937d7373\"" Jan 24 02:41:13.073271 systemd[1]: Started cri-containerd-146cd365c25a3b91898e97f48721481f2b2544d660e922a4981d0559937d7373.scope - libcontainer container 146cd365c25a3b91898e97f48721481f2b2544d660e922a4981d0559937d7373. Jan 24 02:41:13.148654 containerd[1514]: time="2026-01-24T02:41:13.148520119Z" level=info msg="StartContainer for \"146cd365c25a3b91898e97f48721481f2b2544d660e922a4981d0559937d7373\" returns successfully" Jan 24 02:41:13.405111 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 02:41:13.407424 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 24 02:41:13.587073 containerd[1514]: time="2026-01-24T02:41:13.587013726Z" level=info msg="StopPodSandbox for \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\"" Jan 24 02:41:13.587503 containerd[1514]: time="2026-01-24T02:41:13.587346839Z" level=info msg="StopPodSandbox for \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\"" Jan 24 02:41:14.166878 containerd[1514]: 2026-01-24 02:41:13.777 [INFO][3883] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:14.166878 containerd[1514]: 2026-01-24 02:41:13.778 [INFO][3883] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" iface="eth0" netns="/var/run/netns/cni-6cc3d109-af0b-c51e-2fbf-16bfcc4a9b4b" Jan 24 02:41:14.166878 containerd[1514]: 2026-01-24 02:41:13.778 [INFO][3883] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" iface="eth0" netns="/var/run/netns/cni-6cc3d109-af0b-c51e-2fbf-16bfcc4a9b4b" Jan 24 02:41:14.166878 containerd[1514]: 2026-01-24 02:41:13.780 [INFO][3883] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" iface="eth0" netns="/var/run/netns/cni-6cc3d109-af0b-c51e-2fbf-16bfcc4a9b4b" Jan 24 02:41:14.166878 containerd[1514]: 2026-01-24 02:41:13.780 [INFO][3883] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:14.166878 containerd[1514]: 2026-01-24 02:41:13.781 [INFO][3883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:14.166878 containerd[1514]: 2026-01-24 02:41:14.111 [INFO][3900] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" HandleID="k8s-pod-network.6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:14.166878 containerd[1514]: 2026-01-24 02:41:14.114 [INFO][3900] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:14.166878 containerd[1514]: 2026-01-24 02:41:14.115 [INFO][3900] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:14.166878 containerd[1514]: 2026-01-24 02:41:14.151 [WARNING][3900] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" HandleID="k8s-pod-network.6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:14.166878 containerd[1514]: 2026-01-24 02:41:14.152 [INFO][3900] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" HandleID="k8s-pod-network.6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:14.166878 containerd[1514]: 2026-01-24 02:41:14.154 [INFO][3900] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:14.166878 containerd[1514]: 2026-01-24 02:41:14.161 [INFO][3883] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:14.178572 systemd[1]: run-netns-cni\x2d6cc3d109\x2daf0b\x2dc51e\x2d2fbf\x2d16bfcc4a9b4b.mount: Deactivated successfully. 
Jan 24 02:41:14.182379 containerd[1514]: time="2026-01-24T02:41:14.181509435Z" level=info msg="TearDown network for sandbox \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\" successfully" Jan 24 02:41:14.182379 containerd[1514]: time="2026-01-24T02:41:14.181792138Z" level=info msg="StopPodSandbox for \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\" returns successfully" Jan 24 02:41:14.187995 containerd[1514]: time="2026-01-24T02:41:14.187832211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f8b76b7d6-25fqp,Uid:28182695-8b28-4eab-884e-ccb6e32ecfc7,Namespace:calico-system,Attempt:1,}" Jan 24 02:41:14.259604 containerd[1514]: 2026-01-24 02:41:13.779 [INFO][3882] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Jan 24 02:41:14.259604 containerd[1514]: 2026-01-24 02:41:13.781 [INFO][3882] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" iface="eth0" netns="/var/run/netns/cni-b2f91105-3275-86a6-a25b-ac4337c21f30" Jan 24 02:41:14.259604 containerd[1514]: 2026-01-24 02:41:13.781 [INFO][3882] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" iface="eth0" netns="/var/run/netns/cni-b2f91105-3275-86a6-a25b-ac4337c21f30" Jan 24 02:41:14.259604 containerd[1514]: 2026-01-24 02:41:13.781 [INFO][3882] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" iface="eth0" netns="/var/run/netns/cni-b2f91105-3275-86a6-a25b-ac4337c21f30" Jan 24 02:41:14.259604 containerd[1514]: 2026-01-24 02:41:13.782 [INFO][3882] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Jan 24 02:41:14.259604 containerd[1514]: 2026-01-24 02:41:13.782 [INFO][3882] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Jan 24 02:41:14.259604 containerd[1514]: 2026-01-24 02:41:14.109 [INFO][3902] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" HandleID="k8s-pod-network.95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:14.259604 containerd[1514]: 2026-01-24 02:41:14.113 [INFO][3902] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:14.259604 containerd[1514]: 2026-01-24 02:41:14.154 [INFO][3902] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:14.259604 containerd[1514]: 2026-01-24 02:41:14.195 [WARNING][3902] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" HandleID="k8s-pod-network.95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:14.259604 containerd[1514]: 2026-01-24 02:41:14.195 [INFO][3902] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" HandleID="k8s-pod-network.95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:14.259604 containerd[1514]: 2026-01-24 02:41:14.201 [INFO][3902] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:14.259604 containerd[1514]: 2026-01-24 02:41:14.215 [INFO][3882] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Jan 24 02:41:14.262257 containerd[1514]: time="2026-01-24T02:41:14.259814057Z" level=info msg="TearDown network for sandbox \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\" successfully" Jan 24 02:41:14.262257 containerd[1514]: time="2026-01-24T02:41:14.259846489Z" level=info msg="StopPodSandbox for \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\" returns successfully" Jan 24 02:41:14.271793 systemd[1]: run-netns-cni\x2db2f91105\x2d3275\x2d86a6\x2da25b\x2dac4337c21f30.mount: Deactivated successfully. 
Jan 24 02:41:14.284964 containerd[1514]: time="2026-01-24T02:41:14.284478929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcbb787c9-s24s2,Uid:9e38edac-3735-49c2-8f05-a82f9686ac99,Namespace:calico-apiserver,Attempt:1,}" Jan 24 02:41:14.683501 systemd-networkd[1419]: cali5822c98336f: Link UP Jan 24 02:41:14.687099 systemd-networkd[1419]: cali5822c98336f: Gained carrier Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.375 [INFO][3919] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.418 [INFO][3919] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0 whisker-7f8b76b7d6- calico-system 28182695-8b28-4eab-884e-ccb6e32ecfc7 893 0 2026-01-24 02:40:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f8b76b7d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-aqhf7.gb1.brightbox.com whisker-7f8b76b7d6-25fqp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5822c98336f [] [] }} ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Namespace="calico-system" Pod="whisker-7f8b76b7d6-25fqp" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-" Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.418 [INFO][3919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Namespace="calico-system" Pod="whisker-7f8b76b7d6-25fqp" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.530 [INFO][3954] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" HandleID="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.531 [INFO][3954] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" HandleID="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000302430), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-aqhf7.gb1.brightbox.com", "pod":"whisker-7f8b76b7d6-25fqp", "timestamp":"2026-01-24 02:41:14.530228725 +0000 UTC"}, Hostname:"srv-aqhf7.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.531 [INFO][3954] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.532 [INFO][3954] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.532 [INFO][3954] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aqhf7.gb1.brightbox.com' Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.546 [INFO][3954] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.563 [INFO][3954] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.572 [INFO][3954] ipam/ipam.go 511: Trying affinity for 192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.579 [INFO][3954] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.589 [INFO][3954] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.589 [INFO][3954] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.592 [INFO][3954] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.601 [INFO][3954] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.619 [INFO][3954] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.5.1/26] block=192.168.5.0/26 handle="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.620 [INFO][3954] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.1/26] handle="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.620 [INFO][3954] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:14.711179 containerd[1514]: 2026-01-24 02:41:14.620 [INFO][3954] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.1/26] IPv6=[] ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" HandleID="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:14.717311 containerd[1514]: 2026-01-24 02:41:14.623 [INFO][3919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Namespace="calico-system" Pod="whisker-7f8b76b7d6-25fqp" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0", GenerateName:"whisker-7f8b76b7d6-", Namespace:"calico-system", SelfLink:"", UID:"28182695-8b28-4eab-884e-ccb6e32ecfc7", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f8b76b7d6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"", Pod:"whisker-7f8b76b7d6-25fqp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5822c98336f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:14.717311 containerd[1514]: 2026-01-24 02:41:14.623 [INFO][3919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.1/32] ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Namespace="calico-system" Pod="whisker-7f8b76b7d6-25fqp" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:14.717311 containerd[1514]: 2026-01-24 02:41:14.623 [INFO][3919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5822c98336f ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Namespace="calico-system" Pod="whisker-7f8b76b7d6-25fqp" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:14.717311 containerd[1514]: 2026-01-24 02:41:14.677 [INFO][3919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Namespace="calico-system" Pod="whisker-7f8b76b7d6-25fqp" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:14.717311 containerd[1514]: 2026-01-24 02:41:14.683 [INFO][3919] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Namespace="calico-system" Pod="whisker-7f8b76b7d6-25fqp" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0", GenerateName:"whisker-7f8b76b7d6-", Namespace:"calico-system", SelfLink:"", UID:"28182695-8b28-4eab-884e-ccb6e32ecfc7", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f8b76b7d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d", Pod:"whisker-7f8b76b7d6-25fqp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5822c98336f", MAC:"2e:55:2e:2f:a7:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:14.717311 containerd[1514]: 2026-01-24 02:41:14.706 [INFO][3919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Namespace="calico-system" Pod="whisker-7f8b76b7d6-25fqp" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:14.752229 kubelet[2692]: I0124 02:41:14.743347 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m65gq" podStartSLOduration=3.001830429 podStartE2EDuration="25.709456997s" podCreationTimestamp="2026-01-24 02:40:49 +0000 UTC" firstStartedPulling="2026-01-24 02:40:50.003139979 +0000 UTC m=+26.715610135" lastFinishedPulling="2026-01-24 02:41:12.710766544 +0000 UTC m=+49.423236703" observedRunningTime="2026-01-24 02:41:14.14203916 +0000 UTC m=+50.854509343" watchObservedRunningTime="2026-01-24 02:41:14.709456997 +0000 UTC m=+51.421927164" Jan 24 02:41:14.786366 containerd[1514]: time="2026-01-24T02:41:14.785682037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:41:14.786366 containerd[1514]: time="2026-01-24T02:41:14.785878309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:41:14.786366 containerd[1514]: time="2026-01-24T02:41:14.785967358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:14.792719 containerd[1514]: time="2026-01-24T02:41:14.792611727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:14.822492 systemd-networkd[1419]: cali23bd91aa908: Link UP Jan 24 02:41:14.824406 systemd-networkd[1419]: cali23bd91aa908: Gained carrier Jan 24 02:41:14.887803 systemd[1]: Started cri-containerd-6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d.scope - libcontainer container 6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d. Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.407 [INFO][3931] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.431 [INFO][3931] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0 calico-apiserver-7bcbb787c9- calico-apiserver 9e38edac-3735-49c2-8f05-a82f9686ac99 892 0 2026-01-24 02:40:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bcbb787c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-aqhf7.gb1.brightbox.com calico-apiserver-7bcbb787c9-s24s2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali23bd91aa908 [] [] }} ContainerID="24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" Namespace="calico-apiserver" Pod="calico-apiserver-7bcbb787c9-s24s2" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-" Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.431 [INFO][3931] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" Namespace="calico-apiserver" Pod="calico-apiserver-7bcbb787c9-s24s2" 
WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.546 [INFO][3958] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" HandleID="k8s-pod-network.24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.547 [INFO][3958] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" HandleID="k8s-pod-network.24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000283b00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-aqhf7.gb1.brightbox.com", "pod":"calico-apiserver-7bcbb787c9-s24s2", "timestamp":"2026-01-24 02:41:14.546840928 +0000 UTC"}, Hostname:"srv-aqhf7.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.548 [INFO][3958] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.620 [INFO][3958] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.620 [INFO][3958] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aqhf7.gb1.brightbox.com' Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.653 [INFO][3958] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.684 [INFO][3958] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.714 [INFO][3958] ipam/ipam.go 511: Trying affinity for 192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.727 [INFO][3958] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.749 [INFO][3958] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.750 [INFO][3958] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.756 [INFO][3958] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.766 [INFO][3958] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.800 [INFO][3958] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.5.2/26] block=192.168.5.0/26 handle="k8s-pod-network.24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.801 [INFO][3958] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.2/26] handle="k8s-pod-network.24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.801 [INFO][3958] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:14.905451 containerd[1514]: 2026-01-24 02:41:14.801 [INFO][3958] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.2/26] IPv6=[] ContainerID="24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" HandleID="k8s-pod-network.24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:14.908450 containerd[1514]: 2026-01-24 02:41:14.809 [INFO][3931] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" Namespace="calico-apiserver" Pod="calico-apiserver-7bcbb787c9-s24s2" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0", GenerateName:"calico-apiserver-7bcbb787c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e38edac-3735-49c2-8f05-a82f9686ac99", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcbb787c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7bcbb787c9-s24s2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23bd91aa908", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:14.908450 containerd[1514]: 2026-01-24 02:41:14.809 [INFO][3931] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.2/32] ContainerID="24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" Namespace="calico-apiserver" Pod="calico-apiserver-7bcbb787c9-s24s2" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:14.908450 containerd[1514]: 2026-01-24 02:41:14.809 [INFO][3931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23bd91aa908 ContainerID="24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" Namespace="calico-apiserver" Pod="calico-apiserver-7bcbb787c9-s24s2" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:14.908450 containerd[1514]: 2026-01-24 02:41:14.820 [INFO][3931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" Namespace="calico-apiserver" 
Pod="calico-apiserver-7bcbb787c9-s24s2" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:14.908450 containerd[1514]: 2026-01-24 02:41:14.837 [INFO][3931] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" Namespace="calico-apiserver" Pod="calico-apiserver-7bcbb787c9-s24s2" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0", GenerateName:"calico-apiserver-7bcbb787c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e38edac-3735-49c2-8f05-a82f9686ac99", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcbb787c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf", Pod:"calico-apiserver-7bcbb787c9-s24s2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali23bd91aa908", MAC:"62:79:cd:8c:47:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:14.908450 containerd[1514]: 2026-01-24 02:41:14.900 [INFO][3931] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf" Namespace="calico-apiserver" Pod="calico-apiserver-7bcbb787c9-s24s2" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:14.954091 containerd[1514]: time="2026-01-24T02:41:14.950090082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:41:14.954091 containerd[1514]: time="2026-01-24T02:41:14.950192430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:41:14.954091 containerd[1514]: time="2026-01-24T02:41:14.950217787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:14.954091 containerd[1514]: time="2026-01-24T02:41:14.950361161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:15.025551 systemd[1]: Started cri-containerd-24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf.scope - libcontainer container 24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf. 
Jan 24 02:41:15.145478 containerd[1514]: time="2026-01-24T02:41:15.145418788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f8b76b7d6-25fqp,Uid:28182695-8b28-4eab-884e-ccb6e32ecfc7,Namespace:calico-system,Attempt:1,} returns sandbox id \"6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d\"" Jan 24 02:41:15.151777 containerd[1514]: time="2026-01-24T02:41:15.151594027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 02:41:15.166165 containerd[1514]: time="2026-01-24T02:41:15.166015975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcbb787c9-s24s2,Uid:9e38edac-3735-49c2-8f05-a82f9686ac99,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf\"" Jan 24 02:41:15.496989 containerd[1514]: time="2026-01-24T02:41:15.496925866Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:15.507095 containerd[1514]: time="2026-01-24T02:41:15.498732080Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 02:41:15.507345 containerd[1514]: time="2026-01-24T02:41:15.498744503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 02:41:15.507683 kubelet[2692]: E0124 02:41:15.507498 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 02:41:15.511249 
kubelet[2692]: E0124 02:41:15.510927 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 02:41:15.511595 containerd[1514]: time="2026-01-24T02:41:15.511559317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 02:41:15.525454 kubelet[2692]: E0124 02:41:15.525010 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2c29136013d84d03b2adb5ac5c06e984,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9nrp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},S
tartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f8b76b7d6-25fqp_calico-system(28182695-8b28-4eab-884e-ccb6e32ecfc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:15.564969 containerd[1514]: time="2026-01-24T02:41:15.561875135Z" level=info msg="StopPodSandbox for \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\"" Jan 24 02:41:15.566408 containerd[1514]: time="2026-01-24T02:41:15.565999319Z" level=info msg="StopPodSandbox for \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\"" Jan 24 02:41:15.788969 containerd[1514]: 2026-01-24 02:41:15.699 [INFO][4114] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Jan 24 02:41:15.788969 containerd[1514]: 2026-01-24 02:41:15.701 [INFO][4114] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" iface="eth0" netns="/var/run/netns/cni-0b1e372e-01c8-45ae-1f5e-3292d52a0fde" Jan 24 02:41:15.788969 containerd[1514]: 2026-01-24 02:41:15.702 [INFO][4114] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" iface="eth0" netns="/var/run/netns/cni-0b1e372e-01c8-45ae-1f5e-3292d52a0fde" Jan 24 02:41:15.788969 containerd[1514]: 2026-01-24 02:41:15.704 [INFO][4114] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" iface="eth0" netns="/var/run/netns/cni-0b1e372e-01c8-45ae-1f5e-3292d52a0fde" Jan 24 02:41:15.788969 containerd[1514]: 2026-01-24 02:41:15.704 [INFO][4114] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Jan 24 02:41:15.788969 containerd[1514]: 2026-01-24 02:41:15.704 [INFO][4114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Jan 24 02:41:15.788969 containerd[1514]: 2026-01-24 02:41:15.763 [INFO][4133] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" HandleID="k8s-pod-network.b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Workload="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:15.788969 containerd[1514]: 2026-01-24 02:41:15.764 [INFO][4133] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:15.788969 containerd[1514]: 2026-01-24 02:41:15.764 [INFO][4133] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:15.788969 containerd[1514]: 2026-01-24 02:41:15.777 [WARNING][4133] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" HandleID="k8s-pod-network.b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Workload="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:15.788969 containerd[1514]: 2026-01-24 02:41:15.777 [INFO][4133] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" HandleID="k8s-pod-network.b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Workload="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:15.788969 containerd[1514]: 2026-01-24 02:41:15.779 [INFO][4133] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:15.788969 containerd[1514]: 2026-01-24 02:41:15.783 [INFO][4114] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Jan 24 02:41:15.790586 containerd[1514]: time="2026-01-24T02:41:15.790194766Z" level=info msg="TearDown network for sandbox \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\" successfully" Jan 24 02:41:15.790586 containerd[1514]: time="2026-01-24T02:41:15.790250531Z" level=info msg="StopPodSandbox for \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\" returns successfully" Jan 24 02:41:15.794436 containerd[1514]: time="2026-01-24T02:41:15.794397936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rksn8,Uid:310e75f7-dcbf-42b8-8e1b-0553e380b8f3,Namespace:calico-system,Attempt:1,}" Jan 24 02:41:15.801071 systemd[1]: run-netns-cni\x2d0b1e372e\x2d01c8\x2d45ae\x2d1f5e\x2d3292d52a0fde.mount: Deactivated successfully. 
Jan 24 02:41:15.825573 containerd[1514]: time="2026-01-24T02:41:15.825518207Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:15.839369 containerd[1514]: time="2026-01-24T02:41:15.837238053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 02:41:15.839369 containerd[1514]: time="2026-01-24T02:41:15.837421477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 02:41:15.846852 containerd[1514]: 2026-01-24 02:41:15.747 [INFO][4123] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Jan 24 02:41:15.846852 containerd[1514]: 2026-01-24 02:41:15.748 [INFO][4123] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" iface="eth0" netns="/var/run/netns/cni-d6684ce6-80cf-12dc-9a54-f5457005d082" Jan 24 02:41:15.846852 containerd[1514]: 2026-01-24 02:41:15.748 [INFO][4123] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" iface="eth0" netns="/var/run/netns/cni-d6684ce6-80cf-12dc-9a54-f5457005d082" Jan 24 02:41:15.846852 containerd[1514]: 2026-01-24 02:41:15.748 [INFO][4123] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" iface="eth0" netns="/var/run/netns/cni-d6684ce6-80cf-12dc-9a54-f5457005d082" Jan 24 02:41:15.846852 containerd[1514]: 2026-01-24 02:41:15.748 [INFO][4123] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Jan 24 02:41:15.846852 containerd[1514]: 2026-01-24 02:41:15.748 [INFO][4123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Jan 24 02:41:15.846852 containerd[1514]: 2026-01-24 02:41:15.809 [INFO][4139] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" HandleID="k8s-pod-network.82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:15.846852 containerd[1514]: 2026-01-24 02:41:15.814 [INFO][4139] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:15.846852 containerd[1514]: 2026-01-24 02:41:15.814 [INFO][4139] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:15.846852 containerd[1514]: 2026-01-24 02:41:15.827 [WARNING][4139] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" HandleID="k8s-pod-network.82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:15.846852 containerd[1514]: 2026-01-24 02:41:15.827 [INFO][4139] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" HandleID="k8s-pod-network.82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:15.846852 containerd[1514]: 2026-01-24 02:41:15.830 [INFO][4139] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:15.846852 containerd[1514]: 2026-01-24 02:41:15.839 [INFO][4123] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Jan 24 02:41:15.849526 containerd[1514]: time="2026-01-24T02:41:15.849463247Z" level=info msg="TearDown network for sandbox \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\" successfully" Jan 24 02:41:15.849729 containerd[1514]: time="2026-01-24T02:41:15.849697323Z" level=info msg="StopPodSandbox for \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\" returns successfully" Jan 24 02:41:15.854773 kubelet[2692]: E0124 02:41:15.851667 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 02:41:15.854773 kubelet[2692]: E0124 02:41:15.851739 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 02:41:15.854095 systemd[1]: run-netns-cni\x2dd6684ce6\x2d80cf\x2d12dc\x2d9a54\x2df5457005d082.mount: Deactivated successfully. Jan 24 02:41:15.856706 containerd[1514]: time="2026-01-24T02:41:15.855008665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b4jj7,Uid:21cb8328-f771-426d-aa02-0582dac338e9,Namespace:kube-system,Attempt:1,}" Jan 24 02:41:15.861001 kubelet[2692]: E0124 02:41:15.860008 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwhhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bcbb787c9-s24s2_calico-apiserver(9e38edac-3735-49c2-8f05-a82f9686ac99): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:15.867619 kubelet[2692]: E0124 02:41:15.866426 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2" podUID="9e38edac-3735-49c2-8f05-a82f9686ac99" Jan 24 02:41:15.912914 containerd[1514]: time="2026-01-24T02:41:15.912828987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 02:41:16.125419 kubelet[2692]: E0124 
02:41:16.125359 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2" podUID="9e38edac-3735-49c2-8f05-a82f9686ac99" Jan 24 02:41:16.155511 systemd-networkd[1419]: cali36a0a5da7d2: Link UP Jan 24 02:41:16.158762 systemd-networkd[1419]: cali36a0a5da7d2: Gained carrier Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:15.954 [INFO][4157] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:15.981 [INFO][4157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0 coredns-674b8bbfcf- kube-system 21cb8328-f771-426d-aa02-0582dac338e9 924 0 2026-01-24 02:40:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-aqhf7.gb1.brightbox.com coredns-674b8bbfcf-b4jj7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali36a0a5da7d2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4jj7" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-" Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:15.982 [INFO][4157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4jj7" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.044 [INFO][4181] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" HandleID="k8s-pod-network.d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.045 [INFO][4181] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" HandleID="k8s-pod-network.d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5960), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-aqhf7.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-b4jj7", "timestamp":"2026-01-24 02:41:16.044697115 +0000 UTC"}, Hostname:"srv-aqhf7.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.045 [INFO][4181] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.045 [INFO][4181] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.045 [INFO][4181] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aqhf7.gb1.brightbox.com' Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.061 [INFO][4181] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.066 [INFO][4181] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.079 [INFO][4181] ipam/ipam.go 511: Trying affinity for 192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.081 [INFO][4181] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.087 [INFO][4181] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.087 [INFO][4181] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.089 [INFO][4181] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953 Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.096 [INFO][4181] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.115 [INFO][4181] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.5.3/26] block=192.168.5.0/26 handle="k8s-pod-network.d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.115 [INFO][4181] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.3/26] handle="k8s-pod-network.d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.115 [INFO][4181] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:16.209442 containerd[1514]: 2026-01-24 02:41:16.117 [INFO][4181] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.3/26] IPv6=[] ContainerID="d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" HandleID="k8s-pod-network.d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:16.210864 containerd[1514]: 2026-01-24 02:41:16.132 [INFO][4157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4jj7" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"21cb8328-f771-426d-aa02-0582dac338e9", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-b4jj7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36a0a5da7d2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:16.210864 containerd[1514]: 2026-01-24 02:41:16.134 [INFO][4157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.3/32] ContainerID="d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4jj7" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:16.210864 containerd[1514]: 2026-01-24 02:41:16.134 [INFO][4157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36a0a5da7d2 ContainerID="d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4jj7" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:16.210864 containerd[1514]: 2026-01-24 02:41:16.163 [INFO][4157] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4jj7" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:16.210864 containerd[1514]: 2026-01-24 02:41:16.176 [INFO][4157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4jj7" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"21cb8328-f771-426d-aa02-0582dac338e9", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953", Pod:"coredns-674b8bbfcf-b4jj7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36a0a5da7d2", 
MAC:"36:94:9f:9f:33:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:16.210864 containerd[1514]: 2026-01-24 02:41:16.201 [INFO][4157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4jj7" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:16.240345 containerd[1514]: time="2026-01-24T02:41:16.238608398Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:16.241173 containerd[1514]: time="2026-01-24T02:41:16.241118318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 02:41:16.241268 containerd[1514]: time="2026-01-24T02:41:16.241223306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 02:41:16.242833 kubelet[2692]: E0124 02:41:16.242097 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 02:41:16.242833 kubelet[2692]: E0124 02:41:16.242181 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 02:41:16.242833 kubelet[2692]: E0124 02:41:16.242446 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nrp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Ca
pabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f8b76b7d6-25fqp_calico-system(28182695-8b28-4eab-884e-ccb6e32ecfc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:16.244068 kubelet[2692]: E0124 02:41:16.244019 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f8b76b7d6-25fqp" podUID="28182695-8b28-4eab-884e-ccb6e32ecfc7" Jan 24 02:41:16.269449 systemd-networkd[1419]: cali724997fc00b: Link UP Jan 24 02:41:16.273869 systemd-networkd[1419]: cali724997fc00b: Gained carrier Jan 24 02:41:16.307457 systemd-networkd[1419]: cali5822c98336f: 
Gained IPv6LL Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:15.936 [INFO][4148] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:15.971 [INFO][4148] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0 goldmane-666569f655- calico-system 310e75f7-dcbf-42b8-8e1b-0553e380b8f3 923 0 2026-01-24 02:40:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-aqhf7.gb1.brightbox.com goldmane-666569f655-rksn8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali724997fc00b [] [] }} ContainerID="8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" Namespace="calico-system" Pod="goldmane-666569f655-rksn8" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-" Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:15.971 [INFO][4148] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" Namespace="calico-system" Pod="goldmane-666569f655-rksn8" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.086 [INFO][4179] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" HandleID="k8s-pod-network.8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" Workload="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.086 [INFO][4179] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" HandleID="k8s-pod-network.8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" Workload="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-aqhf7.gb1.brightbox.com", "pod":"goldmane-666569f655-rksn8", "timestamp":"2026-01-24 02:41:16.086412138 +0000 UTC"}, Hostname:"srv-aqhf7.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.086 [INFO][4179] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.116 [INFO][4179] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.116 [INFO][4179] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aqhf7.gb1.brightbox.com' Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.165 [INFO][4179] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.195 [INFO][4179] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.209 [INFO][4179] ipam/ipam.go 511: Trying affinity for 192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.218 [INFO][4179] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.226 [INFO][4179] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.227 [INFO][4179] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.229 [INFO][4179] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.234 [INFO][4179] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.248 [INFO][4179] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.5.4/26] block=192.168.5.0/26 handle="k8s-pod-network.8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.248 [INFO][4179] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.4/26] handle="k8s-pod-network.8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.248 [INFO][4179] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:16.309847 containerd[1514]: 2026-01-24 02:41:16.249 [INFO][4179] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.4/26] IPv6=[] ContainerID="8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" HandleID="k8s-pod-network.8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" Workload="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:16.310766 containerd[1514]: 2026-01-24 02:41:16.253 [INFO][4148] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" Namespace="calico-system" Pod="goldmane-666569f655-rksn8" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"310e75f7-dcbf-42b8-8e1b-0553e380b8f3", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-rksn8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali724997fc00b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:16.310766 containerd[1514]: 2026-01-24 02:41:16.253 [INFO][4148] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.4/32] ContainerID="8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" Namespace="calico-system" Pod="goldmane-666569f655-rksn8" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:16.310766 containerd[1514]: 2026-01-24 02:41:16.253 [INFO][4148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali724997fc00b ContainerID="8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" Namespace="calico-system" Pod="goldmane-666569f655-rksn8" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:16.310766 containerd[1514]: 2026-01-24 02:41:16.274 [INFO][4148] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" Namespace="calico-system" Pod="goldmane-666569f655-rksn8" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:16.310766 containerd[1514]: 2026-01-24 02:41:16.275 [INFO][4148] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" Namespace="calico-system" Pod="goldmane-666569f655-rksn8" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"310e75f7-dcbf-42b8-8e1b-0553e380b8f3", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece", Pod:"goldmane-666569f655-rksn8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali724997fc00b", MAC:"f2:e9:a8:b3:65:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:16.310766 containerd[1514]: 2026-01-24 02:41:16.301 [INFO][4148] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece" Namespace="calico-system" Pod="goldmane-666569f655-rksn8" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:16.326994 containerd[1514]: time="2026-01-24T02:41:16.321918776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:41:16.327533 containerd[1514]: time="2026-01-24T02:41:16.327348350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:41:16.327898 containerd[1514]: time="2026-01-24T02:41:16.327843951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:16.329502 containerd[1514]: time="2026-01-24T02:41:16.329368643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:16.357561 systemd[1]: Started cri-containerd-d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953.scope - libcontainer container d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953. Jan 24 02:41:16.378216 containerd[1514]: time="2026-01-24T02:41:16.377771061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:41:16.378216 containerd[1514]: time="2026-01-24T02:41:16.377870990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:41:16.378216 containerd[1514]: time="2026-01-24T02:41:16.377906249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:16.381395 containerd[1514]: time="2026-01-24T02:41:16.379321152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:16.435607 systemd[1]: Started cri-containerd-8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece.scope - libcontainer container 8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece. Jan 24 02:41:16.494954 systemd-networkd[1419]: cali23bd91aa908: Gained IPv6LL Jan 24 02:41:16.508247 containerd[1514]: time="2026-01-24T02:41:16.508197598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b4jj7,Uid:21cb8328-f771-426d-aa02-0582dac338e9,Namespace:kube-system,Attempt:1,} returns sandbox id \"d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953\"" Jan 24 02:41:16.516411 containerd[1514]: time="2026-01-24T02:41:16.515561129Z" level=info msg="CreateContainer within sandbox \"d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 02:41:16.551803 containerd[1514]: time="2026-01-24T02:41:16.551641800Z" level=info msg="CreateContainer within sandbox \"d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"334ef9a28849ee4f39c1013af85e999d8ac290d30b2f544a3fe61d6b8df5b84d\"" Jan 24 02:41:16.553163 containerd[1514]: time="2026-01-24T02:41:16.553112412Z" level=info msg="StartContainer for \"334ef9a28849ee4f39c1013af85e999d8ac290d30b2f544a3fe61d6b8df5b84d\"" Jan 24 02:41:16.558739 containerd[1514]: time="2026-01-24T02:41:16.558701566Z" level=info msg="StopPodSandbox for \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\"" Jan 24 02:41:16.566733 containerd[1514]: time="2026-01-24T02:41:16.566478686Z" level=info msg="StopPodSandbox for 
\"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\"" Jan 24 02:41:16.719451 systemd[1]: Started cri-containerd-334ef9a28849ee4f39c1013af85e999d8ac290d30b2f544a3fe61d6b8df5b84d.scope - libcontainer container 334ef9a28849ee4f39c1013af85e999d8ac290d30b2f544a3fe61d6b8df5b84d. Jan 24 02:41:16.756093 containerd[1514]: time="2026-01-24T02:41:16.755386238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rksn8,Uid:310e75f7-dcbf-42b8-8e1b-0553e380b8f3,Namespace:calico-system,Attempt:1,} returns sandbox id \"8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece\"" Jan 24 02:41:16.762974 containerd[1514]: time="2026-01-24T02:41:16.762676485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 02:41:16.849046 containerd[1514]: time="2026-01-24T02:41:16.848905187Z" level=info msg="StartContainer for \"334ef9a28849ee4f39c1013af85e999d8ac290d30b2f544a3fe61d6b8df5b84d\" returns successfully" Jan 24 02:41:16.942545 containerd[1514]: 2026-01-24 02:41:16.770 [INFO][4403] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Jan 24 02:41:16.942545 containerd[1514]: 2026-01-24 02:41:16.774 [INFO][4403] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" iface="eth0" netns="/var/run/netns/cni-5fbc0ece-6041-7465-d7cc-5475486f9d90" Jan 24 02:41:16.942545 containerd[1514]: 2026-01-24 02:41:16.779 [INFO][4403] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" iface="eth0" netns="/var/run/netns/cni-5fbc0ece-6041-7465-d7cc-5475486f9d90" Jan 24 02:41:16.942545 containerd[1514]: 2026-01-24 02:41:16.781 [INFO][4403] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" iface="eth0" netns="/var/run/netns/cni-5fbc0ece-6041-7465-d7cc-5475486f9d90" Jan 24 02:41:16.942545 containerd[1514]: 2026-01-24 02:41:16.781 [INFO][4403] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Jan 24 02:41:16.942545 containerd[1514]: 2026-01-24 02:41:16.781 [INFO][4403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Jan 24 02:41:16.942545 containerd[1514]: 2026-01-24 02:41:16.900 [INFO][4446] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" HandleID="k8s-pod-network.77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:16.942545 containerd[1514]: 2026-01-24 02:41:16.902 [INFO][4446] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:16.942545 containerd[1514]: 2026-01-24 02:41:16.903 [INFO][4446] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:16.942545 containerd[1514]: 2026-01-24 02:41:16.924 [WARNING][4446] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" HandleID="k8s-pod-network.77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:16.942545 containerd[1514]: 2026-01-24 02:41:16.924 [INFO][4446] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" HandleID="k8s-pod-network.77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:16.942545 containerd[1514]: 2026-01-24 02:41:16.928 [INFO][4446] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:16.942545 containerd[1514]: 2026-01-24 02:41:16.932 [INFO][4403] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Jan 24 02:41:16.946184 containerd[1514]: time="2026-01-24T02:41:16.943357455Z" level=info msg="TearDown network for sandbox \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\" successfully" Jan 24 02:41:16.946184 containerd[1514]: time="2026-01-24T02:41:16.943396867Z" level=info msg="StopPodSandbox for \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\" returns successfully" Jan 24 02:41:16.946990 containerd[1514]: time="2026-01-24T02:41:16.946654481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qrxcz,Uid:a844f830-48b6-4d22-81b9-0c77ec1069d3,Namespace:kube-system,Attempt:1,}" Jan 24 02:41:16.950594 systemd[1]: run-netns-cni\x2d5fbc0ece\x2d6041\x2d7465\x2dd7cc\x2d5475486f9d90.mount: Deactivated successfully. 
Jan 24 02:41:17.026465 containerd[1514]: 2026-01-24 02:41:16.872 [INFO][4407] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Jan 24 02:41:17.026465 containerd[1514]: 2026-01-24 02:41:16.873 [INFO][4407] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" iface="eth0" netns="/var/run/netns/cni-1936d225-80b2-0f7f-44ae-a64ad8c41190" Jan 24 02:41:17.026465 containerd[1514]: 2026-01-24 02:41:16.874 [INFO][4407] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" iface="eth0" netns="/var/run/netns/cni-1936d225-80b2-0f7f-44ae-a64ad8c41190" Jan 24 02:41:17.026465 containerd[1514]: 2026-01-24 02:41:16.874 [INFO][4407] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" iface="eth0" netns="/var/run/netns/cni-1936d225-80b2-0f7f-44ae-a64ad8c41190" Jan 24 02:41:17.026465 containerd[1514]: 2026-01-24 02:41:16.875 [INFO][4407] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Jan 24 02:41:17.026465 containerd[1514]: 2026-01-24 02:41:16.875 [INFO][4407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Jan 24 02:41:17.026465 containerd[1514]: 2026-01-24 02:41:16.973 [INFO][4461] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" HandleID="k8s-pod-network.a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:17.026465 containerd[1514]: 2026-01-24 
02:41:16.974 [INFO][4461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:17.026465 containerd[1514]: 2026-01-24 02:41:16.974 [INFO][4461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:17.026465 containerd[1514]: 2026-01-24 02:41:16.995 [WARNING][4461] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" HandleID="k8s-pod-network.a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:17.026465 containerd[1514]: 2026-01-24 02:41:16.996 [INFO][4461] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" HandleID="k8s-pod-network.a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:17.026465 containerd[1514]: 2026-01-24 02:41:17.006 [INFO][4461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:17.026465 containerd[1514]: 2026-01-24 02:41:17.017 [INFO][4407] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Jan 24 02:41:17.033382 containerd[1514]: time="2026-01-24T02:41:17.030520296Z" level=info msg="TearDown network for sandbox \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\" successfully" Jan 24 02:41:17.033382 containerd[1514]: time="2026-01-24T02:41:17.031105308Z" level=info msg="StopPodSandbox for \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\" returns successfully" Jan 24 02:41:17.034284 containerd[1514]: time="2026-01-24T02:41:17.033800860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5c647d49-zhs4g,Uid:9038de97-8842-48ce-8fdf-e9b5cfec0012,Namespace:calico-system,Attempt:1,}" Jan 24 02:41:17.089495 containerd[1514]: time="2026-01-24T02:41:17.089430113Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:17.091166 containerd[1514]: time="2026-01-24T02:41:17.091075166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 02:41:17.091403 containerd[1514]: time="2026-01-24T02:41:17.091122683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 02:41:17.097012 kubelet[2692]: E0124 02:41:17.096945 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 02:41:17.098042 kubelet[2692]: E0124 02:41:17.097029 2692 kuberuntime_image.go:42] "Failed 
to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 02:41:17.098042 kubelet[2692]: E0124 02:41:17.097247 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kpww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rksn8_calico-system(310e75f7-dcbf-42b8-8e1b-0553e380b8f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:17.100242 kubelet[2692]: E0124 02:41:17.099413 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rksn8" podUID="310e75f7-dcbf-42b8-8e1b-0553e380b8f3" Jan 24 
02:41:17.157334 kubelet[2692]: E0124 02:41:17.157085 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rksn8" podUID="310e75f7-dcbf-42b8-8e1b-0553e380b8f3" Jan 24 02:41:17.185245 containerd[1514]: time="2026-01-24T02:41:17.185012834Z" level=info msg="StopPodSandbox for \"6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d\"" Jan 24 02:41:17.189699 kubelet[2692]: E0124 02:41:17.188656 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2" podUID="9e38edac-3735-49c2-8f05-a82f9686ac99" Jan 24 02:41:17.264693 systemd[1]: cri-containerd-6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d.scope: Deactivated successfully. 
Jan 24 02:41:17.348485 containerd[1514]: time="2026-01-24T02:41:17.344265234Z" level=info msg="shim disconnected" id=6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d namespace=k8s.io Jan 24 02:41:17.349434 containerd[1514]: time="2026-01-24T02:41:17.349402381Z" level=warning msg="cleaning up after shim disconnected" id=6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d namespace=k8s.io Jan 24 02:41:17.349576 containerd[1514]: time="2026-01-24T02:41:17.349550777Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 02:41:17.391714 systemd[1]: Started sshd@13-10.230.33.130:22-157.245.70.174:58576.service - OpenSSH per-connection server daemon (157.245.70.174:58576). Jan 24 02:41:17.560616 containerd[1514]: time="2026-01-24T02:41:17.560550975Z" level=info msg="StopPodSandbox for \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\"" Jan 24 02:41:17.578737 containerd[1514]: time="2026-01-24T02:41:17.578356217Z" level=info msg="StopPodSandbox for \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\"" Jan 24 02:41:17.589687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d-rootfs.mount: Deactivated successfully. Jan 24 02:41:17.589849 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d-shm.mount: Deactivated successfully. Jan 24 02:41:17.589985 systemd[1]: run-netns-cni\x2d1936d225\x2d80b2\x2d0f7f\x2d44ae\x2da64ad8c41190.mount: Deactivated successfully. Jan 24 02:41:17.595693 sshd[4527]: Connection closed by authenticating user root 157.245.70.174 port 58576 [preauth] Jan 24 02:41:17.599636 systemd[1]: sshd@13-10.230.33.130:22-157.245.70.174:58576.service: Deactivated successfully. 
Jan 24 02:41:17.646676 systemd-networkd[1419]: cali36a0a5da7d2: Gained IPv6LL Jan 24 02:41:17.765192 systemd-networkd[1419]: calida1c6eb74b3: Link UP Jan 24 02:41:17.766616 systemd-networkd[1419]: calida1c6eb74b3: Gained carrier Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.064 [INFO][4474] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.169 [INFO][4474] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0 coredns-674b8bbfcf- kube-system a844f830-48b6-4d22-81b9-0c77ec1069d3 945 0 2026-01-24 02:40:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-aqhf7.gb1.brightbox.com coredns-674b8bbfcf-qrxcz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida1c6eb74b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrxcz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-" Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.171 [INFO][4474] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrxcz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.548 [INFO][4504] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" HandleID="k8s-pod-network.76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" 
Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.549 [INFO][4504] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" HandleID="k8s-pod-network.76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122770), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-aqhf7.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-qrxcz", "timestamp":"2026-01-24 02:41:17.548269163 +0000 UTC"}, Hostname:"srv-aqhf7.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.549 [INFO][4504] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.549 [INFO][4504] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.549 [INFO][4504] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aqhf7.gb1.brightbox.com' Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.600 [INFO][4504] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.625 [INFO][4504] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.654 [INFO][4504] ipam/ipam.go 511: Trying affinity for 192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.665 [INFO][4504] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.672 [INFO][4504] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.672 [INFO][4504] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.682 [INFO][4504] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8 Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.706 [INFO][4504] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.735 [INFO][4504] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.5.5/26] block=192.168.5.0/26 handle="k8s-pod-network.76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.735 [INFO][4504] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.5/26] handle="k8s-pod-network.76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.735 [INFO][4504] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:17.809967 containerd[1514]: 2026-01-24 02:41:17.735 [INFO][4504] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.5/26] IPv6=[] ContainerID="76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" HandleID="k8s-pod-network.76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:17.813051 containerd[1514]: 2026-01-24 02:41:17.747 [INFO][4474] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrxcz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a844f830-48b6-4d22-81b9-0c77ec1069d3", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-qrxcz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida1c6eb74b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:17.813051 containerd[1514]: 2026-01-24 02:41:17.747 [INFO][4474] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.5/32] ContainerID="76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrxcz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:17.813051 containerd[1514]: 2026-01-24 02:41:17.748 [INFO][4474] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida1c6eb74b3 ContainerID="76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrxcz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:17.813051 containerd[1514]: 2026-01-24 02:41:17.767 [INFO][4474] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrxcz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:17.813051 containerd[1514]: 2026-01-24 02:41:17.768 [INFO][4474] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrxcz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a844f830-48b6-4d22-81b9-0c77ec1069d3", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8", Pod:"coredns-674b8bbfcf-qrxcz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida1c6eb74b3", 
MAC:"1e:a4:5d:cb:c8:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:17.813051 containerd[1514]: 2026-01-24 02:41:17.803 [INFO][4474] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8" Namespace="kube-system" Pod="coredns-674b8bbfcf-qrxcz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:17.841158 kubelet[2692]: I0124 02:41:17.840676 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-b4jj7" podStartSLOduration=47.840617276 podStartE2EDuration="47.840617276s" podCreationTimestamp="2026-01-24 02:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 02:41:17.271663713 +0000 UTC m=+53.984133895" watchObservedRunningTime="2026-01-24 02:41:17.840617276 +0000 UTC m=+54.553087438" Jan 24 02:41:17.869204 systemd-networkd[1419]: cali5822c98336f: Link DOWN Jan 24 02:41:17.869217 systemd-networkd[1419]: cali5822c98336f: Lost carrier Jan 24 02:41:18.007763 containerd[1514]: time="2026-01-24T02:41:18.006661561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:41:18.007763 containerd[1514]: time="2026-01-24T02:41:18.006766894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:41:18.007763 containerd[1514]: time="2026-01-24T02:41:18.006830363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:18.007763 containerd[1514]: time="2026-01-24T02:41:18.007034836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:18.030474 systemd-networkd[1419]: cali724997fc00b: Gained IPv6LL Jan 24 02:41:18.039782 systemd-networkd[1419]: cali704490c9e47: Link UP Jan 24 02:41:18.042447 systemd-networkd[1419]: cali704490c9e47: Gained carrier Jan 24 02:41:18.091597 systemd[1]: Started cri-containerd-76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8.scope - libcontainer container 76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8. Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.288 [INFO][4484] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.396 [INFO][4484] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0 calico-kube-controllers-7d5c647d49- calico-system 9038de97-8842-48ce-8fdf-e9b5cfec0012 948 0 2026-01-24 02:40:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d5c647d49 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-aqhf7.gb1.brightbox.com calico-kube-controllers-7d5c647d49-zhs4g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali704490c9e47 [] [] }} ContainerID="6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" 
Namespace="calico-system" Pod="calico-kube-controllers-7d5c647d49-zhs4g" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-" Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.396 [INFO][4484] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" Namespace="calico-system" Pod="calico-kube-controllers-7d5c647d49-zhs4g" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.747 [INFO][4535] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" HandleID="k8s-pod-network.6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.749 [INFO][4535] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" HandleID="k8s-pod-network.6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032b380), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-aqhf7.gb1.brightbox.com", "pod":"calico-kube-controllers-7d5c647d49-zhs4g", "timestamp":"2026-01-24 02:41:17.74756777 +0000 UTC"}, Hostname:"srv-aqhf7.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.750 [INFO][4535] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.751 [INFO][4535] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.751 [INFO][4535] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aqhf7.gb1.brightbox.com' Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.814 [INFO][4535] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.857 [INFO][4535] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.903 [INFO][4535] ipam/ipam.go 511: Trying affinity for 192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.922 [INFO][4535] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.932 [INFO][4535] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.933 [INFO][4535] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.941 [INFO][4535] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069 Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.963 [INFO][4535] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.0/26 
handle="k8s-pod-network.6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.981 [INFO][4535] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.5.6/26] block=192.168.5.0/26 handle="k8s-pod-network.6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.982 [INFO][4535] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.6/26] handle="k8s-pod-network.6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.982 [INFO][4535] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:18.137273 containerd[1514]: 2026-01-24 02:41:17.983 [INFO][4535] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.6/26] IPv6=[] ContainerID="6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" HandleID="k8s-pod-network.6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:18.140780 containerd[1514]: 2026-01-24 02:41:18.004 [INFO][4484] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" Namespace="calico-system" Pod="calico-kube-controllers-7d5c647d49-zhs4g" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0", GenerateName:"calico-kube-controllers-7d5c647d49-", Namespace:"calico-system", SelfLink:"", UID:"9038de97-8842-48ce-8fdf-e9b5cfec0012", 
ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d5c647d49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-7d5c647d49-zhs4g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali704490c9e47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:18.140780 containerd[1514]: 2026-01-24 02:41:18.004 [INFO][4484] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.6/32] ContainerID="6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" Namespace="calico-system" Pod="calico-kube-controllers-7d5c647d49-zhs4g" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:18.140780 containerd[1514]: 2026-01-24 02:41:18.005 [INFO][4484] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali704490c9e47 ContainerID="6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" Namespace="calico-system" Pod="calico-kube-controllers-7d5c647d49-zhs4g" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" 
Jan 24 02:41:18.140780 containerd[1514]: 2026-01-24 02:41:18.057 [INFO][4484] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" Namespace="calico-system" Pod="calico-kube-controllers-7d5c647d49-zhs4g" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:18.140780 containerd[1514]: 2026-01-24 02:41:18.062 [INFO][4484] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" Namespace="calico-system" Pod="calico-kube-controllers-7d5c647d49-zhs4g" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0", GenerateName:"calico-kube-controllers-7d5c647d49-", Namespace:"calico-system", SelfLink:"", UID:"9038de97-8842-48ce-8fdf-e9b5cfec0012", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d5c647d49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069", 
Pod:"calico-kube-controllers-7d5c647d49-zhs4g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali704490c9e47", MAC:"6e:7a:65:3d:3f:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:18.140780 containerd[1514]: 2026-01-24 02:41:18.111 [INFO][4484] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069" Namespace="calico-system" Pod="calico-kube-controllers-7d5c647d49-zhs4g" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:18.184141 kubelet[2692]: I0124 02:41:18.182837 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Jan 24 02:41:18.190359 kubelet[2692]: E0124 02:41:18.189278 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rksn8" podUID="310e75f7-dcbf-42b8-8e1b-0553e380b8f3" Jan 24 02:41:18.210524 containerd[1514]: time="2026-01-24T02:41:18.209872040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:41:18.210524 containerd[1514]: time="2026-01-24T02:41:18.209957358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:41:18.210524 containerd[1514]: time="2026-01-24T02:41:18.209981140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:18.210524 containerd[1514]: time="2026-01-24T02:41:18.210094787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:18.286575 systemd[1]: Started cri-containerd-6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069.scope - libcontainer container 6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069. Jan 24 02:41:18.440625 containerd[1514]: time="2026-01-24T02:41:18.440204548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qrxcz,Uid:a844f830-48b6-4d22-81b9-0c77ec1069d3,Namespace:kube-system,Attempt:1,} returns sandbox id \"76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8\"" Jan 24 02:41:18.443119 containerd[1514]: 2026-01-24 02:41:17.850 [INFO][4593] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Jan 24 02:41:18.443119 containerd[1514]: 2026-01-24 02:41:17.853 [INFO][4593] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" iface="eth0" netns="/var/run/netns/cni-6f17cc04-8d72-8dcb-2e0c-bcc63f121352" Jan 24 02:41:18.443119 containerd[1514]: 2026-01-24 02:41:17.854 [INFO][4593] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" iface="eth0" netns="/var/run/netns/cni-6f17cc04-8d72-8dcb-2e0c-bcc63f121352" Jan 24 02:41:18.443119 containerd[1514]: 2026-01-24 02:41:17.893 [INFO][4593] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" after=39.374574ms iface="eth0" netns="/var/run/netns/cni-6f17cc04-8d72-8dcb-2e0c-bcc63f121352" Jan 24 02:41:18.443119 containerd[1514]: 2026-01-24 02:41:17.894 [INFO][4593] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Jan 24 02:41:18.443119 containerd[1514]: 2026-01-24 02:41:17.896 [INFO][4593] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Jan 24 02:41:18.443119 containerd[1514]: 2026-01-24 02:41:18.119 [INFO][4618] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" HandleID="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:18.443119 containerd[1514]: 2026-01-24 02:41:18.119 [INFO][4618] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:18.443119 containerd[1514]: 2026-01-24 02:41:18.119 [INFO][4618] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 02:41:18.443119 containerd[1514]: 2026-01-24 02:41:18.420 [INFO][4618] ipam/ipam_plugin.go 455: Released address using handleID ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" HandleID="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:18.443119 containerd[1514]: 2026-01-24 02:41:18.420 [INFO][4618] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" HandleID="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:18.443119 containerd[1514]: 2026-01-24 02:41:18.425 [INFO][4618] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:18.443119 containerd[1514]: 2026-01-24 02:41:18.429 [INFO][4593] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Jan 24 02:41:18.446538 containerd[1514]: time="2026-01-24T02:41:18.443834107Z" level=info msg="TearDown network for sandbox \"6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d\" successfully" Jan 24 02:41:18.446538 containerd[1514]: time="2026-01-24T02:41:18.443905054Z" level=info msg="StopPodSandbox for \"6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d\" returns successfully" Jan 24 02:41:18.446538 containerd[1514]: time="2026-01-24T02:41:18.444944633Z" level=info msg="StopPodSandbox for \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\"" Jan 24 02:41:18.457158 containerd[1514]: time="2026-01-24T02:41:18.457094207Z" level=info msg="CreateContainer within sandbox \"76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 02:41:18.497875 containerd[1514]: 2026-01-24 02:41:18.035 [INFO][4581] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Jan 24 02:41:18.497875 containerd[1514]: 2026-01-24 02:41:18.061 [INFO][4581] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" iface="eth0" netns="/var/run/netns/cni-1d03ce70-b2a6-8b92-125f-c80ac6e1894b" Jan 24 02:41:18.497875 containerd[1514]: 2026-01-24 02:41:18.062 [INFO][4581] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" iface="eth0" netns="/var/run/netns/cni-1d03ce70-b2a6-8b92-125f-c80ac6e1894b" Jan 24 02:41:18.497875 containerd[1514]: 2026-01-24 02:41:18.062 [INFO][4581] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" iface="eth0" netns="/var/run/netns/cni-1d03ce70-b2a6-8b92-125f-c80ac6e1894b" Jan 24 02:41:18.497875 containerd[1514]: 2026-01-24 02:41:18.062 [INFO][4581] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Jan 24 02:41:18.497875 containerd[1514]: 2026-01-24 02:41:18.062 [INFO][4581] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Jan 24 02:41:18.497875 containerd[1514]: 2026-01-24 02:41:18.368 [INFO][4655] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" HandleID="k8s-pod-network.3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Workload="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:18.497875 containerd[1514]: 2026-01-24 02:41:18.369 [INFO][4655] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:18.497875 containerd[1514]: 2026-01-24 02:41:18.425 [INFO][4655] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:18.497875 containerd[1514]: 2026-01-24 02:41:18.460 [WARNING][4655] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" HandleID="k8s-pod-network.3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Workload="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:18.497875 containerd[1514]: 2026-01-24 02:41:18.460 [INFO][4655] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" HandleID="k8s-pod-network.3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Workload="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:18.497875 containerd[1514]: 2026-01-24 02:41:18.466 [INFO][4655] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:18.497875 containerd[1514]: 2026-01-24 02:41:18.478 [INFO][4581] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Jan 24 02:41:18.500880 containerd[1514]: time="2026-01-24T02:41:18.499531704Z" level=info msg="TearDown network for sandbox \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\" successfully" Jan 24 02:41:18.500880 containerd[1514]: time="2026-01-24T02:41:18.499571382Z" level=info msg="StopPodSandbox for \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\" returns successfully" Jan 24 02:41:18.503296 containerd[1514]: time="2026-01-24T02:41:18.502197717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rrnz,Uid:9f63ab66-558d-4f53-8717-746e17757652,Namespace:calico-system,Attempt:1,}" Jan 24 02:41:18.513367 containerd[1514]: time="2026-01-24T02:41:18.511758657Z" level=info msg="CreateContainer within sandbox \"76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee495d89fb55f0fd10de44a6eeffca87484d7fa395c503dcb39f869ab9d83f60\"" Jan 24 02:41:18.517577 
containerd[1514]: time="2026-01-24T02:41:18.516925516Z" level=info msg="StartContainer for \"ee495d89fb55f0fd10de44a6eeffca87484d7fa395c503dcb39f869ab9d83f60\"" Jan 24 02:41:18.527185 containerd[1514]: 2026-01-24 02:41:18.106 [INFO][4588] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Jan 24 02:41:18.527185 containerd[1514]: 2026-01-24 02:41:18.112 [INFO][4588] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" iface="eth0" netns="/var/run/netns/cni-95ca2f4d-907c-011a-54e1-e94ba68e121b" Jan 24 02:41:18.527185 containerd[1514]: 2026-01-24 02:41:18.117 [INFO][4588] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" iface="eth0" netns="/var/run/netns/cni-95ca2f4d-907c-011a-54e1-e94ba68e121b" Jan 24 02:41:18.527185 containerd[1514]: 2026-01-24 02:41:18.136 [INFO][4588] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" iface="eth0" netns="/var/run/netns/cni-95ca2f4d-907c-011a-54e1-e94ba68e121b" Jan 24 02:41:18.527185 containerd[1514]: 2026-01-24 02:41:18.136 [INFO][4588] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Jan 24 02:41:18.527185 containerd[1514]: 2026-01-24 02:41:18.136 [INFO][4588] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Jan 24 02:41:18.527185 containerd[1514]: 2026-01-24 02:41:18.379 [INFO][4669] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" HandleID="k8s-pod-network.89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:18.527185 containerd[1514]: 2026-01-24 02:41:18.381 [INFO][4669] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:18.527185 containerd[1514]: 2026-01-24 02:41:18.468 [INFO][4669] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:18.527185 containerd[1514]: 2026-01-24 02:41:18.499 [WARNING][4669] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" HandleID="k8s-pod-network.89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:18.527185 containerd[1514]: 2026-01-24 02:41:18.499 [INFO][4669] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" HandleID="k8s-pod-network.89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:18.527185 containerd[1514]: 2026-01-24 02:41:18.506 [INFO][4669] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:18.527185 containerd[1514]: 2026-01-24 02:41:18.517 [INFO][4588] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Jan 24 02:41:18.528993 containerd[1514]: time="2026-01-24T02:41:18.528200786Z" level=info msg="TearDown network for sandbox \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\" successfully" Jan 24 02:41:18.528993 containerd[1514]: time="2026-01-24T02:41:18.528558218Z" level=info msg="StopPodSandbox for \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\" returns successfully" Jan 24 02:41:18.532511 containerd[1514]: time="2026-01-24T02:41:18.532383775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcbb787c9-46gqx,Uid:57ed0d28-f7e6-4e62-8d12-5c54e0de4159,Namespace:calico-apiserver,Attempt:1,}" Jan 24 02:41:18.579032 systemd[1]: run-netns-cni\x2d6f17cc04\x2d8d72\x2d8dcb\x2d2e0c\x2dbcc63f121352.mount: Deactivated successfully. Jan 24 02:41:18.579209 systemd[1]: run-netns-cni\x2d1d03ce70\x2db2a6\x2d8b92\x2d125f\x2dc80ac6e1894b.mount: Deactivated successfully. 
Jan 24 02:41:18.579338 systemd[1]: run-netns-cni\x2d95ca2f4d\x2d907c\x2d011a\x2d54e1\x2de94ba68e121b.mount: Deactivated successfully. Jan 24 02:41:18.746673 systemd[1]: Started cri-containerd-ee495d89fb55f0fd10de44a6eeffca87484d7fa395c503dcb39f869ab9d83f60.scope - libcontainer container ee495d89fb55f0fd10de44a6eeffca87484d7fa395c503dcb39f869ab9d83f60. Jan 24 02:41:18.857084 containerd[1514]: time="2026-01-24T02:41:18.856857147Z" level=info msg="StartContainer for \"ee495d89fb55f0fd10de44a6eeffca87484d7fa395c503dcb39f869ab9d83f60\" returns successfully" Jan 24 02:41:18.909943 containerd[1514]: 2026-01-24 02:41:18.698 [WARNING][4751] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0", GenerateName:"whisker-7f8b76b7d6-", Namespace:"calico-system", SelfLink:"", UID:"28182695-8b28-4eab-884e-ccb6e32ecfc7", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f8b76b7d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d", Pod:"whisker-7f8b76b7d6-25fqp", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5822c98336f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:18.909943 containerd[1514]: 2026-01-24 02:41:18.700 [INFO][4751] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:18.909943 containerd[1514]: 2026-01-24 02:41:18.700 [INFO][4751] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" iface="eth0" netns="" Jan 24 02:41:18.909943 containerd[1514]: 2026-01-24 02:41:18.701 [INFO][4751] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:18.909943 containerd[1514]: 2026-01-24 02:41:18.701 [INFO][4751] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:18.909943 containerd[1514]: 2026-01-24 02:41:18.873 [INFO][4801] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" HandleID="k8s-pod-network.6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:18.909943 containerd[1514]: 2026-01-24 02:41:18.874 [INFO][4801] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:18.909943 containerd[1514]: 2026-01-24 02:41:18.875 [INFO][4801] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 02:41:18.909943 containerd[1514]: 2026-01-24 02:41:18.896 [WARNING][4801] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" HandleID="k8s-pod-network.6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:18.909943 containerd[1514]: 2026-01-24 02:41:18.897 [INFO][4801] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" HandleID="k8s-pod-network.6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:18.909943 containerd[1514]: 2026-01-24 02:41:18.899 [INFO][4801] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:18.909943 containerd[1514]: 2026-01-24 02:41:18.903 [INFO][4751] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:18.909943 containerd[1514]: time="2026-01-24T02:41:18.909644132Z" level=info msg="TearDown network for sandbox \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\" successfully" Jan 24 02:41:18.909943 containerd[1514]: time="2026-01-24T02:41:18.909689811Z" level=info msg="StopPodSandbox for \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\" returns successfully" Jan 24 02:41:19.014262 kubelet[2692]: I0124 02:41:19.012939 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/28182695-8b28-4eab-884e-ccb6e32ecfc7-whisker-backend-key-pair\") pod \"28182695-8b28-4eab-884e-ccb6e32ecfc7\" (UID: \"28182695-8b28-4eab-884e-ccb6e32ecfc7\") " Jan 24 02:41:19.014262 kubelet[2692]: I0124 02:41:19.013049 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nrp7\" (UniqueName: \"kubernetes.io/projected/28182695-8b28-4eab-884e-ccb6e32ecfc7-kube-api-access-9nrp7\") pod \"28182695-8b28-4eab-884e-ccb6e32ecfc7\" (UID: \"28182695-8b28-4eab-884e-ccb6e32ecfc7\") " Jan 24 02:41:19.014262 kubelet[2692]: I0124 02:41:19.013097 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28182695-8b28-4eab-884e-ccb6e32ecfc7-whisker-ca-bundle\") pod \"28182695-8b28-4eab-884e-ccb6e32ecfc7\" (UID: \"28182695-8b28-4eab-884e-ccb6e32ecfc7\") " Jan 24 02:41:19.053602 kubelet[2692]: I0124 02:41:19.049931 2692 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28182695-8b28-4eab-884e-ccb6e32ecfc7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "28182695-8b28-4eab-884e-ccb6e32ecfc7" (UID: "28182695-8b28-4eab-884e-ccb6e32ecfc7"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 02:41:19.066969 kubelet[2692]: I0124 02:41:19.063401 2692 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28182695-8b28-4eab-884e-ccb6e32ecfc7-kube-api-access-9nrp7" (OuterVolumeSpecName: "kube-api-access-9nrp7") pod "28182695-8b28-4eab-884e-ccb6e32ecfc7" (UID: "28182695-8b28-4eab-884e-ccb6e32ecfc7"). InnerVolumeSpecName "kube-api-access-9nrp7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 02:41:19.084978 kubelet[2692]: I0124 02:41:19.084461 2692 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28182695-8b28-4eab-884e-ccb6e32ecfc7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "28182695-8b28-4eab-884e-ccb6e32ecfc7" (UID: "28182695-8b28-4eab-884e-ccb6e32ecfc7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 02:41:19.113452 containerd[1514]: time="2026-01-24T02:41:19.113149042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d5c647d49-zhs4g,Uid:9038de97-8842-48ce-8fdf-e9b5cfec0012,Namespace:calico-system,Attempt:1,} returns sandbox id \"6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069\"" Jan 24 02:41:19.114027 kubelet[2692]: I0124 02:41:19.113983 2692 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28182695-8b28-4eab-884e-ccb6e32ecfc7-whisker-ca-bundle\") on node \"srv-aqhf7.gb1.brightbox.com\" DevicePath \"\"" Jan 24 02:41:19.114148 kubelet[2692]: I0124 02:41:19.114030 2692 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/28182695-8b28-4eab-884e-ccb6e32ecfc7-whisker-backend-key-pair\") on node \"srv-aqhf7.gb1.brightbox.com\" DevicePath \"\"" Jan 24 02:41:19.114148 kubelet[2692]: I0124 02:41:19.114057 2692 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9nrp7\" (UniqueName: \"kubernetes.io/projected/28182695-8b28-4eab-884e-ccb6e32ecfc7-kube-api-access-9nrp7\") on node \"srv-aqhf7.gb1.brightbox.com\" DevicePath \"\"" Jan 24 02:41:19.119347 containerd[1514]: time="2026-01-24T02:41:19.119292936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 02:41:19.154614 systemd-networkd[1419]: cali31c5cf595f2: Link UP Jan 24 02:41:19.158036 systemd-networkd[1419]: cali31c5cf595f2: Gained carrier Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:18.863 [INFO][4762] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0 csi-node-driver- calico-system 9f63ab66-558d-4f53-8717-746e17757652 974 0 2026-01-24 02:40:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-aqhf7.gb1.brightbox.com csi-node-driver-8rrnz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali31c5cf595f2 [] [] }} ContainerID="d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" Namespace="calico-system" Pod="csi-node-driver-8rrnz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-" Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:18.863 [INFO][4762] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" Namespace="calico-system" Pod="csi-node-driver-8rrnz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:18.998 [INFO][4837] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" HandleID="k8s-pod-network.d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" Workload="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.000 [INFO][4837] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" HandleID="k8s-pod-network.d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" Workload="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000100430), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-aqhf7.gb1.brightbox.com", "pod":"csi-node-driver-8rrnz", "timestamp":"2026-01-24 02:41:18.998280752 +0000 UTC"}, Hostname:"srv-aqhf7.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.000 [INFO][4837] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.000 [INFO][4837] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.000 [INFO][4837] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aqhf7.gb1.brightbox.com' Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.020 [INFO][4837] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.036 [INFO][4837] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.063 [INFO][4837] ipam/ipam.go 511: Trying affinity for 192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.074 [INFO][4837] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.094 [INFO][4837] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.094 [INFO][4837] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.104 [INFO][4837] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187 Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.112 [INFO][4837] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.131 [INFO][4837] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.5.7/26] block=192.168.5.0/26 handle="k8s-pod-network.d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.131 [INFO][4837] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.7/26] handle="k8s-pod-network.d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.132 [INFO][4837] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:19.248307 containerd[1514]: 2026-01-24 02:41:19.132 [INFO][4837] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.7/26] IPv6=[] ContainerID="d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" HandleID="k8s-pod-network.d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" Workload="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:19.251097 containerd[1514]: 2026-01-24 02:41:19.139 [INFO][4762] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" Namespace="calico-system" Pod="csi-node-driver-8rrnz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f63ab66-558d-4f53-8717-746e17757652", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-8rrnz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31c5cf595f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:19.251097 containerd[1514]: 2026-01-24 02:41:19.140 [INFO][4762] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.7/32] ContainerID="d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" Namespace="calico-system" Pod="csi-node-driver-8rrnz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:19.251097 containerd[1514]: 2026-01-24 02:41:19.140 [INFO][4762] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31c5cf595f2 ContainerID="d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" Namespace="calico-system" Pod="csi-node-driver-8rrnz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:19.251097 containerd[1514]: 2026-01-24 02:41:19.173 [INFO][4762] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" Namespace="calico-system" Pod="csi-node-driver-8rrnz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:19.251097 
containerd[1514]: 2026-01-24 02:41:19.181 [INFO][4762] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" Namespace="calico-system" Pod="csi-node-driver-8rrnz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f63ab66-558d-4f53-8717-746e17757652", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187", Pod:"csi-node-driver-8rrnz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31c5cf595f2", MAC:"8e:dd:5e:24:6c:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:19.251097 containerd[1514]: 
2026-01-24 02:41:19.231 [INFO][4762] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187" Namespace="calico-system" Pod="csi-node-driver-8rrnz" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:19.310523 systemd-networkd[1419]: cali704490c9e47: Gained IPv6LL Jan 24 02:41:19.322847 kubelet[2692]: I0124 02:41:19.322573 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qrxcz" podStartSLOduration=49.322544774 podStartE2EDuration="49.322544774s" podCreationTimestamp="2026-01-24 02:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 02:41:19.314301817 +0000 UTC m=+56.026771996" watchObservedRunningTime="2026-01-24 02:41:19.322544774 +0000 UTC m=+56.035014942" Jan 24 02:41:19.329170 containerd[1514]: time="2026-01-24T02:41:19.328957748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:41:19.330640 containerd[1514]: time="2026-01-24T02:41:19.330004184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:41:19.330640 containerd[1514]: time="2026-01-24T02:41:19.330035757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:19.330640 containerd[1514]: time="2026-01-24T02:41:19.330186050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:19.408549 systemd[1]: Started cri-containerd-d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187.scope - libcontainer container d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187. Jan 24 02:41:19.415469 systemd[1]: Removed slice kubepods-besteffort-pod28182695_8b28_4eab_884e_ccb6e32ecfc7.slice - libcontainer container kubepods-besteffort-pod28182695_8b28_4eab_884e_ccb6e32ecfc7.slice. Jan 24 02:41:19.431692 containerd[1514]: time="2026-01-24T02:41:19.431626069Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:19.439897 containerd[1514]: time="2026-01-24T02:41:19.439618953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 02:41:19.439897 containerd[1514]: time="2026-01-24T02:41:19.439807669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 02:41:19.441016 kubelet[2692]: E0124 02:41:19.440479 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 02:41:19.441016 kubelet[2692]: E0124 02:41:19.440601 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 02:41:19.441016 kubelet[2692]: E0124 02:41:19.440921 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dbhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d5c647d49-zhs4g_calico-system(9038de97-8842-48ce-8fdf-e9b5cfec0012): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:19.443286 kubelet[2692]: E0124 02:41:19.442984 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c647d49-zhs4g" podUID="9038de97-8842-48ce-8fdf-e9b5cfec0012" Jan 24 02:41:19.450439 systemd-networkd[1419]: cali8e247cf4083: Link UP Jan 24 02:41:19.454624 kernel: bpftool[4907]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 
02:41:19.456277 systemd-networkd[1419]: cali8e247cf4083: Gained carrier Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:18.857 [INFO][4781] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0 calico-apiserver-7bcbb787c9- calico-apiserver 57ed0d28-f7e6-4e62-8d12-5c54e0de4159 976 0 2026-01-24 02:40:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bcbb787c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-aqhf7.gb1.brightbox.com calico-apiserver-7bcbb787c9-46gqx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8e247cf4083 [] [] }} ContainerID="e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" Namespace="calico-apiserver" Pod="calico-apiserver-7bcbb787c9-46gqx" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-" Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:18.859 [INFO][4781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" Namespace="calico-apiserver" Pod="calico-apiserver-7bcbb787c9-46gqx" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.018 [INFO][4835] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" HandleID="k8s-pod-network.e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.023 [INFO][4835] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" HandleID="k8s-pod-network.e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031c130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-aqhf7.gb1.brightbox.com", "pod":"calico-apiserver-7bcbb787c9-46gqx", "timestamp":"2026-01-24 02:41:19.018874981 +0000 UTC"}, Hostname:"srv-aqhf7.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.023 [INFO][4835] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.132 [INFO][4835] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.132 [INFO][4835] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aqhf7.gb1.brightbox.com' Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.167 [INFO][4835] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.242 [INFO][4835] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.263 [INFO][4835] ipam/ipam.go 511: Trying affinity for 192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.274 [INFO][4835] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.290 [INFO][4835] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.291 [INFO][4835] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.308 [INFO][4835] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6 Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.336 [INFO][4835] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.368 [INFO][4835] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.5.8/26] block=192.168.5.0/26 handle="k8s-pod-network.e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.368 [INFO][4835] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.8/26] handle="k8s-pod-network.e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.368 [INFO][4835] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:19.498708 containerd[1514]: 2026-01-24 02:41:19.368 [INFO][4835] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.8/26] IPv6=[] ContainerID="e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" HandleID="k8s-pod-network.e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:19.506650 containerd[1514]: 2026-01-24 02:41:19.392 [INFO][4781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" Namespace="calico-apiserver" Pod="calico-apiserver-7bcbb787c9-46gqx" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0", GenerateName:"calico-apiserver-7bcbb787c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"57ed0d28-f7e6-4e62-8d12-5c54e0de4159", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcbb787c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7bcbb787c9-46gqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e247cf4083", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:19.506650 containerd[1514]: 2026-01-24 02:41:19.411 [INFO][4781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.8/32] ContainerID="e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" Namespace="calico-apiserver" Pod="calico-apiserver-7bcbb787c9-46gqx" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:19.506650 containerd[1514]: 2026-01-24 02:41:19.411 [INFO][4781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e247cf4083 ContainerID="e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" Namespace="calico-apiserver" Pod="calico-apiserver-7bcbb787c9-46gqx" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:19.506650 containerd[1514]: 2026-01-24 02:41:19.458 [INFO][4781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" Namespace="calico-apiserver" 
Pod="calico-apiserver-7bcbb787c9-46gqx" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:19.506650 containerd[1514]: 2026-01-24 02:41:19.462 [INFO][4781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" Namespace="calico-apiserver" Pod="calico-apiserver-7bcbb787c9-46gqx" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0", GenerateName:"calico-apiserver-7bcbb787c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"57ed0d28-f7e6-4e62-8d12-5c54e0de4159", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcbb787c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6", Pod:"calico-apiserver-7bcbb787c9-46gqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali8e247cf4083", MAC:"f2:1a:65:4f:5c:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:19.506650 containerd[1514]: 2026-01-24 02:41:19.488 [INFO][4781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6" Namespace="calico-apiserver" Pod="calico-apiserver-7bcbb787c9-46gqx" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:19.583588 systemd[1]: var-lib-kubelet-pods-28182695\x2d8b28\x2d4eab\x2d884e\x2dccb6e32ecfc7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9nrp7.mount: Deactivated successfully. Jan 24 02:41:19.583753 systemd[1]: var-lib-kubelet-pods-28182695\x2d8b28\x2d4eab\x2d884e\x2dccb6e32ecfc7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 02:41:19.602270 containerd[1514]: time="2026-01-24T02:41:19.602011486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rrnz,Uid:9f63ab66-558d-4f53-8717-746e17757652,Namespace:calico-system,Attempt:1,} returns sandbox id \"d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187\"" Jan 24 02:41:19.609080 containerd[1514]: time="2026-01-24T02:41:19.608689646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 02:41:19.647621 containerd[1514]: time="2026-01-24T02:41:19.647124795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:41:19.647621 containerd[1514]: time="2026-01-24T02:41:19.647254078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:41:19.647621 containerd[1514]: time="2026-01-24T02:41:19.647272272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:19.648473 containerd[1514]: time="2026-01-24T02:41:19.648305530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:19.692629 systemd[1]: run-containerd-runc-k8s.io-e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6-runc.2Fl97b.mount: Deactivated successfully. Jan 24 02:41:19.694538 systemd-networkd[1419]: calida1c6eb74b3: Gained IPv6LL Jan 24 02:41:19.705639 systemd[1]: Started cri-containerd-e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6.scope - libcontainer container e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6. Jan 24 02:41:19.776314 systemd[1]: Created slice kubepods-besteffort-pod4eea0edb_bf06_4866_bb21_e6ce9438b127.slice - libcontainer container kubepods-besteffort-pod4eea0edb_bf06_4866_bb21_e6ce9438b127.slice. 
Jan 24 02:41:19.870436 kubelet[2692]: I0124 02:41:19.869958 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-956rv\" (UniqueName: \"kubernetes.io/projected/4eea0edb-bf06-4866-bb21-e6ce9438b127-kube-api-access-956rv\") pod \"whisker-db9cdc476-5b4vs\" (UID: \"4eea0edb-bf06-4866-bb21-e6ce9438b127\") " pod="calico-system/whisker-db9cdc476-5b4vs" Jan 24 02:41:19.870436 kubelet[2692]: I0124 02:41:19.870031 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4eea0edb-bf06-4866-bb21-e6ce9438b127-whisker-backend-key-pair\") pod \"whisker-db9cdc476-5b4vs\" (UID: \"4eea0edb-bf06-4866-bb21-e6ce9438b127\") " pod="calico-system/whisker-db9cdc476-5b4vs" Jan 24 02:41:19.870436 kubelet[2692]: I0124 02:41:19.870083 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4eea0edb-bf06-4866-bb21-e6ce9438b127-whisker-ca-bundle\") pod \"whisker-db9cdc476-5b4vs\" (UID: \"4eea0edb-bf06-4866-bb21-e6ce9438b127\") " pod="calico-system/whisker-db9cdc476-5b4vs" Jan 24 02:41:19.919360 containerd[1514]: time="2026-01-24T02:41:19.918757643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bcbb787c9-46gqx,Uid:57ed0d28-f7e6-4e62-8d12-5c54e0de4159,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6\"" Jan 24 02:41:19.946356 containerd[1514]: time="2026-01-24T02:41:19.945042809Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:19.950193 containerd[1514]: time="2026-01-24T02:41:19.950030236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 02:41:19.950649 containerd[1514]: time="2026-01-24T02:41:19.950497701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 02:41:19.951067 kubelet[2692]: E0124 02:41:19.950933 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 02:41:19.951340 kubelet[2692]: E0124 02:41:19.951291 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 02:41:19.954127 kubelet[2692]: E0124 02:41:19.953845 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7pg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rrnz_calico-system(9f63ab66-558d-4f53-8717-746e17757652): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:19.955127 containerd[1514]: time="2026-01-24T02:41:19.954856290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 02:41:20.088996 containerd[1514]: time="2026-01-24T02:41:20.088243043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-db9cdc476-5b4vs,Uid:4eea0edb-bf06-4866-bb21-e6ce9438b127,Namespace:calico-system,Attempt:0,}" Jan 24 02:41:20.206708 systemd-networkd[1419]: cali31c5cf595f2: Gained IPv6LL Jan 24 02:41:20.278354 kubelet[2692]: E0124 02:41:20.278013 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c647d49-zhs4g" podUID="9038de97-8842-48ce-8fdf-e9b5cfec0012" Jan 24 02:41:20.321881 containerd[1514]: time="2026-01-24T02:41:20.321639378Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:20.324827 containerd[1514]: time="2026-01-24T02:41:20.324776692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 02:41:20.325104 containerd[1514]: time="2026-01-24T02:41:20.324947522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 02:41:20.326294 
kubelet[2692]: E0124 02:41:20.325473 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 02:41:20.326294 kubelet[2692]: E0124 02:41:20.325557 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 02:41:20.326294 kubelet[2692]: E0124 02:41:20.325918 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtjhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bcbb787c9-46gqx_calico-apiserver(57ed0d28-f7e6-4e62-8d12-5c54e0de4159): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:20.327055 containerd[1514]: time="2026-01-24T02:41:20.326553736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 02:41:20.327452 kubelet[2692]: E0124 02:41:20.327247 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-46gqx" podUID="57ed0d28-f7e6-4e62-8d12-5c54e0de4159" Jan 24 02:41:20.431071 systemd-networkd[1419]: cali2c42bf157e9: Link UP Jan 24 02:41:20.431447 systemd-networkd[1419]: cali2c42bf157e9: Gained carrier Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.209 [INFO][4974] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aqhf7.gb1.brightbox.com-k8s-whisker--db9cdc476--5b4vs-eth0 whisker-db9cdc476- calico-system 4eea0edb-bf06-4866-bb21-e6ce9438b127 1031 0 2026-01-24 02:41:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:db9cdc476 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-aqhf7.gb1.brightbox.com whisker-db9cdc476-5b4vs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2c42bf157e9 [] [] }} ContainerID="e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" Namespace="calico-system" Pod="whisker-db9cdc476-5b4vs" 
WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--db9cdc476--5b4vs-" Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.210 [INFO][4974] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" Namespace="calico-system" Pod="whisker-db9cdc476-5b4vs" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--db9cdc476--5b4vs-eth0" Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.266 [INFO][4986] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" HandleID="k8s-pod-network.e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--db9cdc476--5b4vs-eth0" Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.267 [INFO][4986] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" HandleID="k8s-pod-network.e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--db9cdc476--5b4vs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e890), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-aqhf7.gb1.brightbox.com", "pod":"whisker-db9cdc476-5b4vs", "timestamp":"2026-01-24 02:41:20.266972574 +0000 UTC"}, Hostname:"srv-aqhf7.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.267 [INFO][4986] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.267 [INFO][4986] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.267 [INFO][4986] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aqhf7.gb1.brightbox.com' Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.297 [INFO][4986] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.313 [INFO][4986] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.339 [INFO][4986] ipam/ipam.go 511: Trying affinity for 192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.348 [INFO][4986] ipam/ipam.go 158: Attempting to load block cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.369 [INFO][4986] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.369 [INFO][4986] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.374 [INFO][4986] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4 Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.387 [INFO][4986] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.416 [INFO][4986] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.5.9/26] block=192.168.5.0/26 handle="k8s-pod-network.e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.418 [INFO][4986] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.5.9/26] handle="k8s-pod-network.e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" host="srv-aqhf7.gb1.brightbox.com" Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.418 [INFO][4986] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:20.482464 containerd[1514]: 2026-01-24 02:41:20.418 [INFO][4986] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.5.9/26] IPv6=[] ContainerID="e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" HandleID="k8s-pod-network.e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--db9cdc476--5b4vs-eth0" Jan 24 02:41:20.485013 containerd[1514]: 2026-01-24 02:41:20.420 [INFO][4974] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" Namespace="calico-system" Pod="whisker-db9cdc476-5b4vs" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--db9cdc476--5b4vs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-whisker--db9cdc476--5b4vs-eth0", GenerateName:"whisker-db9cdc476-", Namespace:"calico-system", SelfLink:"", UID:"4eea0edb-bf06-4866-bb21-e6ce9438b127", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 41, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"db9cdc476", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"", Pod:"whisker-db9cdc476-5b4vs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2c42bf157e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:20.485013 containerd[1514]: 2026-01-24 02:41:20.420 [INFO][4974] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.9/32] ContainerID="e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" Namespace="calico-system" Pod="whisker-db9cdc476-5b4vs" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--db9cdc476--5b4vs-eth0" Jan 24 02:41:20.485013 containerd[1514]: 2026-01-24 02:41:20.420 [INFO][4974] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c42bf157e9 ContainerID="e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" Namespace="calico-system" Pod="whisker-db9cdc476-5b4vs" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--db9cdc476--5b4vs-eth0" Jan 24 02:41:20.485013 containerd[1514]: 2026-01-24 02:41:20.430 [INFO][4974] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" Namespace="calico-system" Pod="whisker-db9cdc476-5b4vs" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--db9cdc476--5b4vs-eth0" Jan 24 02:41:20.485013 containerd[1514]: 2026-01-24 02:41:20.433 [INFO][4974] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" Namespace="calico-system" Pod="whisker-db9cdc476-5b4vs" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--db9cdc476--5b4vs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-whisker--db9cdc476--5b4vs-eth0", GenerateName:"whisker-db9cdc476-", Namespace:"calico-system", SelfLink:"", UID:"4eea0edb-bf06-4866-bb21-e6ce9438b127", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 41, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"db9cdc476", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4", Pod:"whisker-db9cdc476-5b4vs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2c42bf157e9", MAC:"5a:e4:6d:a1:5b:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:20.485013 containerd[1514]: 2026-01-24 02:41:20.475 [INFO][4974] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4" 
Namespace="calico-system" Pod="whisker-db9cdc476-5b4vs" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--db9cdc476--5b4vs-eth0" Jan 24 02:41:20.528972 containerd[1514]: time="2026-01-24T02:41:20.528542697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 02:41:20.529142 containerd[1514]: time="2026-01-24T02:41:20.529056489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 02:41:20.529193 containerd[1514]: time="2026-01-24T02:41:20.529145568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:20.530708 containerd[1514]: time="2026-01-24T02:41:20.530548670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 02:41:20.612651 systemd[1]: Started cri-containerd-e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4.scope - libcontainer container e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4. 
Jan 24 02:41:20.656160 containerd[1514]: time="2026-01-24T02:41:20.656094236Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:20.659967 containerd[1514]: time="2026-01-24T02:41:20.659912202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 02:41:20.660078 containerd[1514]: time="2026-01-24T02:41:20.660038491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 02:41:20.660981 kubelet[2692]: E0124 02:41:20.660750 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 02:41:20.663209 kubelet[2692]: E0124 02:41:20.660943 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 02:41:20.664191 kubelet[2692]: E0124 02:41:20.663708 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7pg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rrnz_calico-system(9f63ab66-558d-4f53-8717-746e17757652): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:20.665472 kubelet[2692]: E0124 02:41:20.665330 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652" Jan 24 02:41:20.863064 containerd[1514]: time="2026-01-24T02:41:20.863012780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-db9cdc476-5b4vs,Uid:4eea0edb-bf06-4866-bb21-e6ce9438b127,Namespace:calico-system,Attempt:0,} returns sandbox id \"e562ad4aefa45625d730282730e0911140de39ac0b0089ff983b59ae9089fbc4\"" Jan 24 02:41:20.867298 containerd[1514]: time="2026-01-24T02:41:20.867248173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 02:41:20.925364 systemd-networkd[1419]: vxlan.calico: Link UP Jan 24 02:41:20.925377 systemd-networkd[1419]: vxlan.calico: Gained carrier Jan 24 02:41:21.166518 systemd-networkd[1419]: cali8e247cf4083: Gained IPv6LL Jan 24 02:41:21.181191 containerd[1514]: time="2026-01-24T02:41:21.180974989Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:21.182606 containerd[1514]: time="2026-01-24T02:41:21.182278289Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 02:41:21.182606 containerd[1514]: time="2026-01-24T02:41:21.182424289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 02:41:21.191219 kubelet[2692]: E0124 02:41:21.182928 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 02:41:21.191219 kubelet[2692]: E0124 02:41:21.190024 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 02:41:21.191219 kubelet[2692]: E0124 02:41:21.190500 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2c29136013d84d03b2adb5ac5c06e984,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-956rv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-db9cdc476-5b4vs_calico-system(4eea0edb-bf06-4866-bb21-e6ce9438b127): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:21.195387 containerd[1514]: time="2026-01-24T02:41:21.195349932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 
02:41:21.287266 kubelet[2692]: E0124 02:41:21.287147 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652" Jan 24 02:41:21.287648 kubelet[2692]: E0124 02:41:21.287511 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-46gqx" podUID="57ed0d28-f7e6-4e62-8d12-5c54e0de4159" Jan 24 02:41:21.507638 containerd[1514]: time="2026-01-24T02:41:21.506860441Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:21.509774 containerd[1514]: time="2026-01-24T02:41:21.508692493Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 02:41:21.509774 containerd[1514]: time="2026-01-24T02:41:21.508780116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 02:41:21.510045 kubelet[2692]: E0124 02:41:21.509009 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 02:41:21.510045 kubelet[2692]: E0124 02:41:21.509091 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 02:41:21.510045 kubelet[2692]: E0124 02:41:21.509667 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-956rv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-db9cdc476-5b4vs_calico-system(4eea0edb-bf06-4866-bb21-e6ce9438b127): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:21.511066 kubelet[2692]: E0124 02:41:21.510965 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db9cdc476-5b4vs" podUID="4eea0edb-bf06-4866-bb21-e6ce9438b127" Jan 24 02:41:21.572126 kubelet[2692]: I0124 02:41:21.572078 2692 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28182695-8b28-4eab-884e-ccb6e32ecfc7" path="/var/lib/kubelet/pods/28182695-8b28-4eab-884e-ccb6e32ecfc7/volumes" Jan 24 02:41:22.287882 kubelet[2692]: E0124 02:41:22.287740 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db9cdc476-5b4vs" podUID="4eea0edb-bf06-4866-bb21-e6ce9438b127" Jan 24 02:41:22.446618 systemd-networkd[1419]: cali2c42bf157e9: Gained IPv6LL Jan 24 02:41:22.448411 systemd-networkd[1419]: vxlan.calico: Gained IPv6LL Jan 24 02:41:23.499541 containerd[1514]: time="2026-01-24T02:41:23.499381176Z" level=info msg="StopPodSandbox for \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\"" Jan 24 02:41:23.652048 containerd[1514]: 2026-01-24 02:41:23.565 [WARNING][5146] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f63ab66-558d-4f53-8717-746e17757652", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187", Pod:"csi-node-driver-8rrnz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31c5cf595f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:23.652048 containerd[1514]: 2026-01-24 02:41:23.565 [INFO][5146] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Jan 24 02:41:23.652048 containerd[1514]: 2026-01-24 02:41:23.565 [INFO][5146] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" iface="eth0" netns="" Jan 24 02:41:23.652048 containerd[1514]: 2026-01-24 02:41:23.565 [INFO][5146] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Jan 24 02:41:23.652048 containerd[1514]: 2026-01-24 02:41:23.565 [INFO][5146] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Jan 24 02:41:23.652048 containerd[1514]: 2026-01-24 02:41:23.627 [INFO][5153] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" HandleID="k8s-pod-network.3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Workload="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:23.652048 containerd[1514]: 2026-01-24 02:41:23.629 [INFO][5153] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:23.652048 containerd[1514]: 2026-01-24 02:41:23.629 [INFO][5153] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:23.652048 containerd[1514]: 2026-01-24 02:41:23.639 [WARNING][5153] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" HandleID="k8s-pod-network.3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Workload="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:23.652048 containerd[1514]: 2026-01-24 02:41:23.639 [INFO][5153] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" HandleID="k8s-pod-network.3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Workload="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:23.652048 containerd[1514]: 2026-01-24 02:41:23.642 [INFO][5153] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:23.652048 containerd[1514]: 2026-01-24 02:41:23.648 [INFO][5146] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Jan 24 02:41:23.654014 containerd[1514]: time="2026-01-24T02:41:23.652109179Z" level=info msg="TearDown network for sandbox \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\" successfully" Jan 24 02:41:23.654014 containerd[1514]: time="2026-01-24T02:41:23.652140748Z" level=info msg="StopPodSandbox for \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\" returns successfully" Jan 24 02:41:23.654014 containerd[1514]: time="2026-01-24T02:41:23.652925864Z" level=info msg="RemovePodSandbox for \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\"" Jan 24 02:41:23.654014 containerd[1514]: time="2026-01-24T02:41:23.652979140Z" level=info msg="Forcibly stopping sandbox \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\"" Jan 24 02:41:23.763487 containerd[1514]: 2026-01-24 02:41:23.710 [WARNING][5170] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f63ab66-558d-4f53-8717-746e17757652", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"d2b77716837687c000fe61eb77d295f962a72d169b929277485972eab4130187", Pod:"csi-node-driver-8rrnz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31c5cf595f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:23.763487 containerd[1514]: 2026-01-24 02:41:23.711 [INFO][5170] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Jan 24 02:41:23.763487 containerd[1514]: 2026-01-24 02:41:23.711 [INFO][5170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" iface="eth0" netns="" Jan 24 02:41:23.763487 containerd[1514]: 2026-01-24 02:41:23.711 [INFO][5170] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Jan 24 02:41:23.763487 containerd[1514]: 2026-01-24 02:41:23.711 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Jan 24 02:41:23.763487 containerd[1514]: 2026-01-24 02:41:23.746 [INFO][5177] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" HandleID="k8s-pod-network.3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Workload="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:23.763487 containerd[1514]: 2026-01-24 02:41:23.746 [INFO][5177] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:23.763487 containerd[1514]: 2026-01-24 02:41:23.746 [INFO][5177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:23.763487 containerd[1514]: 2026-01-24 02:41:23.756 [WARNING][5177] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" HandleID="k8s-pod-network.3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Workload="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:23.763487 containerd[1514]: 2026-01-24 02:41:23.756 [INFO][5177] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" HandleID="k8s-pod-network.3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Workload="srv--aqhf7.gb1.brightbox.com-k8s-csi--node--driver--8rrnz-eth0" Jan 24 02:41:23.763487 containerd[1514]: 2026-01-24 02:41:23.758 [INFO][5177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:23.763487 containerd[1514]: 2026-01-24 02:41:23.760 [INFO][5170] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9" Jan 24 02:41:23.764343 containerd[1514]: time="2026-01-24T02:41:23.763505757Z" level=info msg="TearDown network for sandbox \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\" successfully" Jan 24 02:41:23.781425 containerd[1514]: time="2026-01-24T02:41:23.781294309Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 24 02:41:23.781578 containerd[1514]: time="2026-01-24T02:41:23.781435284Z" level=info msg="RemovePodSandbox \"3e8943cac9f91918c5585351db446053796cd9fc0f77fdea23bf4c18b73245f9\" returns successfully" Jan 24 02:41:23.783518 containerd[1514]: time="2026-01-24T02:41:23.782737094Z" level=info msg="StopPodSandbox for \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\"" Jan 24 02:41:23.889936 containerd[1514]: 2026-01-24 02:41:23.831 [WARNING][5191] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0", GenerateName:"calico-kube-controllers-7d5c647d49-", Namespace:"calico-system", SelfLink:"", UID:"9038de97-8842-48ce-8fdf-e9b5cfec0012", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d5c647d49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069", Pod:"calico-kube-controllers-7d5c647d49-zhs4g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.6/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali704490c9e47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:23.889936 containerd[1514]: 2026-01-24 02:41:23.832 [INFO][5191] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Jan 24 02:41:23.889936 containerd[1514]: 2026-01-24 02:41:23.832 [INFO][5191] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" iface="eth0" netns="" Jan 24 02:41:23.889936 containerd[1514]: 2026-01-24 02:41:23.832 [INFO][5191] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Jan 24 02:41:23.889936 containerd[1514]: 2026-01-24 02:41:23.832 [INFO][5191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Jan 24 02:41:23.889936 containerd[1514]: 2026-01-24 02:41:23.874 [INFO][5198] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" HandleID="k8s-pod-network.a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:23.889936 containerd[1514]: 2026-01-24 02:41:23.874 [INFO][5198] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:23.889936 containerd[1514]: 2026-01-24 02:41:23.875 [INFO][5198] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:23.889936 containerd[1514]: 2026-01-24 02:41:23.884 [WARNING][5198] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" HandleID="k8s-pod-network.a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:23.889936 containerd[1514]: 2026-01-24 02:41:23.884 [INFO][5198] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" HandleID="k8s-pod-network.a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:23.889936 containerd[1514]: 2026-01-24 02:41:23.886 [INFO][5198] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:23.889936 containerd[1514]: 2026-01-24 02:41:23.888 [INFO][5191] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Jan 24 02:41:23.891387 containerd[1514]: time="2026-01-24T02:41:23.889991330Z" level=info msg="TearDown network for sandbox \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\" successfully" Jan 24 02:41:23.891387 containerd[1514]: time="2026-01-24T02:41:23.890035130Z" level=info msg="StopPodSandbox for \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\" returns successfully" Jan 24 02:41:23.891387 containerd[1514]: time="2026-01-24T02:41:23.890856823Z" level=info msg="RemovePodSandbox for \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\"" Jan 24 02:41:23.891387 containerd[1514]: time="2026-01-24T02:41:23.890911463Z" level=info msg="Forcibly stopping sandbox \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\"" Jan 24 02:41:23.988951 containerd[1514]: 2026-01-24 02:41:23.940 [WARNING][5212] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0", GenerateName:"calico-kube-controllers-7d5c647d49-", Namespace:"calico-system", SelfLink:"", UID:"9038de97-8842-48ce-8fdf-e9b5cfec0012", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d5c647d49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"6dc45b684b05441d34f7c208f00299147bee131a4d95bcb8f5b49a3065538069", Pod:"calico-kube-controllers-7d5c647d49-zhs4g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali704490c9e47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:23.988951 containerd[1514]: 2026-01-24 02:41:23.941 [INFO][5212] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Jan 24 02:41:23.988951 containerd[1514]: 2026-01-24 02:41:23.941 [INFO][5212] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" iface="eth0" netns="" Jan 24 02:41:23.988951 containerd[1514]: 2026-01-24 02:41:23.941 [INFO][5212] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Jan 24 02:41:23.988951 containerd[1514]: 2026-01-24 02:41:23.941 [INFO][5212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Jan 24 02:41:23.988951 containerd[1514]: 2026-01-24 02:41:23.972 [INFO][5219] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" HandleID="k8s-pod-network.a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:23.988951 containerd[1514]: 2026-01-24 02:41:23.972 [INFO][5219] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:23.988951 containerd[1514]: 2026-01-24 02:41:23.973 [INFO][5219] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:23.988951 containerd[1514]: 2026-01-24 02:41:23.983 [WARNING][5219] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" HandleID="k8s-pod-network.a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:23.988951 containerd[1514]: 2026-01-24 02:41:23.983 [INFO][5219] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" HandleID="k8s-pod-network.a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--kube--controllers--7d5c647d49--zhs4g-eth0" Jan 24 02:41:23.988951 containerd[1514]: 2026-01-24 02:41:23.984 [INFO][5219] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:23.988951 containerd[1514]: 2026-01-24 02:41:23.986 [INFO][5212] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef" Jan 24 02:41:23.990394 containerd[1514]: time="2026-01-24T02:41:23.989022296Z" level=info msg="TearDown network for sandbox \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\" successfully" Jan 24 02:41:23.993065 containerd[1514]: time="2026-01-24T02:41:23.992981765Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 24 02:41:23.993165 containerd[1514]: time="2026-01-24T02:41:23.993113842Z" level=info msg="RemovePodSandbox \"a571294e9efe613e458c7bc11d5f717b758b7714b373e089d182f44d18219fef\" returns successfully" Jan 24 02:41:23.994275 containerd[1514]: time="2026-01-24T02:41:23.994213372Z" level=info msg="StopPodSandbox for \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\"" Jan 24 02:41:24.123929 containerd[1514]: 2026-01-24 02:41:24.053 [WARNING][5234] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0", GenerateName:"calico-apiserver-7bcbb787c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e38edac-3735-49c2-8f05-a82f9686ac99", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcbb787c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf", Pod:"calico-apiserver-7bcbb787c9-s24s2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23bd91aa908", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:24.123929 containerd[1514]: 2026-01-24 02:41:24.053 [INFO][5234] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Jan 24 02:41:24.123929 containerd[1514]: 2026-01-24 02:41:24.053 [INFO][5234] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" iface="eth0" netns="" Jan 24 02:41:24.123929 containerd[1514]: 2026-01-24 02:41:24.053 [INFO][5234] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Jan 24 02:41:24.123929 containerd[1514]: 2026-01-24 02:41:24.053 [INFO][5234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Jan 24 02:41:24.123929 containerd[1514]: 2026-01-24 02:41:24.104 [INFO][5241] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" HandleID="k8s-pod-network.95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:24.123929 containerd[1514]: 2026-01-24 02:41:24.105 [INFO][5241] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:24.123929 containerd[1514]: 2026-01-24 02:41:24.105 [INFO][5241] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:24.123929 containerd[1514]: 2026-01-24 02:41:24.118 [WARNING][5241] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" HandleID="k8s-pod-network.95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:24.123929 containerd[1514]: 2026-01-24 02:41:24.118 [INFO][5241] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" HandleID="k8s-pod-network.95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:24.123929 containerd[1514]: 2026-01-24 02:41:24.120 [INFO][5241] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:24.123929 containerd[1514]: 2026-01-24 02:41:24.122 [INFO][5234] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Jan 24 02:41:24.124876 containerd[1514]: time="2026-01-24T02:41:24.124001876Z" level=info msg="TearDown network for sandbox \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\" successfully" Jan 24 02:41:24.124876 containerd[1514]: time="2026-01-24T02:41:24.124058719Z" level=info msg="StopPodSandbox for \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\" returns successfully" Jan 24 02:41:24.125705 containerd[1514]: time="2026-01-24T02:41:24.125193292Z" level=info msg="RemovePodSandbox for \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\"" Jan 24 02:41:24.125705 containerd[1514]: time="2026-01-24T02:41:24.125239930Z" level=info msg="Forcibly stopping sandbox \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\"" Jan 24 02:41:24.225105 containerd[1514]: 2026-01-24 02:41:24.180 [WARNING][5256] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0", GenerateName:"calico-apiserver-7bcbb787c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e38edac-3735-49c2-8f05-a82f9686ac99", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcbb787c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"24bed7150fc7cfc88c55bc33149371aa4229ad0153c14fb19eba1c4a841212cf", Pod:"calico-apiserver-7bcbb787c9-s24s2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23bd91aa908", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:24.225105 containerd[1514]: 2026-01-24 02:41:24.181 [INFO][5256] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Jan 24 02:41:24.225105 containerd[1514]: 2026-01-24 02:41:24.181 [INFO][5256] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" iface="eth0" netns="" Jan 24 02:41:24.225105 containerd[1514]: 2026-01-24 02:41:24.181 [INFO][5256] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Jan 24 02:41:24.225105 containerd[1514]: 2026-01-24 02:41:24.181 [INFO][5256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Jan 24 02:41:24.225105 containerd[1514]: 2026-01-24 02:41:24.209 [INFO][5263] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" HandleID="k8s-pod-network.95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:24.225105 containerd[1514]: 2026-01-24 02:41:24.209 [INFO][5263] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:24.225105 containerd[1514]: 2026-01-24 02:41:24.209 [INFO][5263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:24.225105 containerd[1514]: 2026-01-24 02:41:24.219 [WARNING][5263] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" HandleID="k8s-pod-network.95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:24.225105 containerd[1514]: 2026-01-24 02:41:24.219 [INFO][5263] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" HandleID="k8s-pod-network.95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--s24s2-eth0" Jan 24 02:41:24.225105 containerd[1514]: 2026-01-24 02:41:24.221 [INFO][5263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:24.225105 containerd[1514]: 2026-01-24 02:41:24.223 [INFO][5256] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523" Jan 24 02:41:24.226836 containerd[1514]: time="2026-01-24T02:41:24.225928228Z" level=info msg="TearDown network for sandbox \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\" successfully" Jan 24 02:41:24.233172 containerd[1514]: time="2026-01-24T02:41:24.233043610Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 24 02:41:24.233172 containerd[1514]: time="2026-01-24T02:41:24.233109149Z" level=info msg="RemovePodSandbox \"95233453b820b831d9cae7d5fbe6068c9967873e41eb44b78132801d67a3e523\" returns successfully" Jan 24 02:41:24.233830 containerd[1514]: time="2026-01-24T02:41:24.233777746Z" level=info msg="StopPodSandbox for \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\"" Jan 24 02:41:24.333442 containerd[1514]: 2026-01-24 02:41:24.281 [WARNING][5277] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a844f830-48b6-4d22-81b9-0c77ec1069d3", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8", Pod:"coredns-674b8bbfcf-qrxcz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida1c6eb74b3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:24.333442 containerd[1514]: 2026-01-24 02:41:24.282 [INFO][5277] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Jan 24 02:41:24.333442 containerd[1514]: 2026-01-24 02:41:24.282 [INFO][5277] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" iface="eth0" netns="" Jan 24 02:41:24.333442 containerd[1514]: 2026-01-24 02:41:24.282 [INFO][5277] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Jan 24 02:41:24.333442 containerd[1514]: 2026-01-24 02:41:24.282 [INFO][5277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Jan 24 02:41:24.333442 containerd[1514]: 2026-01-24 02:41:24.317 [INFO][5284] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" HandleID="k8s-pod-network.77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:24.333442 containerd[1514]: 2026-01-24 02:41:24.318 [INFO][5284] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 02:41:24.333442 containerd[1514]: 2026-01-24 02:41:24.318 [INFO][5284] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:24.333442 containerd[1514]: 2026-01-24 02:41:24.327 [WARNING][5284] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" HandleID="k8s-pod-network.77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:24.333442 containerd[1514]: 2026-01-24 02:41:24.327 [INFO][5284] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" HandleID="k8s-pod-network.77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:24.333442 containerd[1514]: 2026-01-24 02:41:24.329 [INFO][5284] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:24.333442 containerd[1514]: 2026-01-24 02:41:24.331 [INFO][5277] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Jan 24 02:41:24.334921 containerd[1514]: time="2026-01-24T02:41:24.333515315Z" level=info msg="TearDown network for sandbox \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\" successfully" Jan 24 02:41:24.334921 containerd[1514]: time="2026-01-24T02:41:24.333552085Z" level=info msg="StopPodSandbox for \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\" returns successfully" Jan 24 02:41:24.334921 containerd[1514]: time="2026-01-24T02:41:24.334295422Z" level=info msg="RemovePodSandbox for \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\"" Jan 24 02:41:24.335601 containerd[1514]: time="2026-01-24T02:41:24.335084346Z" level=info msg="Forcibly stopping sandbox \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\"" Jan 24 02:41:24.434685 containerd[1514]: 2026-01-24 02:41:24.384 [WARNING][5298] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a844f830-48b6-4d22-81b9-0c77ec1069d3", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"76e18167a2af239bce1075fb01a6e329dd30591cf6aa6f5c4becd83f61936af8", Pod:"coredns-674b8bbfcf-qrxcz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida1c6eb74b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:24.434685 containerd[1514]: 
2026-01-24 02:41:24.386 [INFO][5298] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Jan 24 02:41:24.434685 containerd[1514]: 2026-01-24 02:41:24.386 [INFO][5298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" iface="eth0" netns="" Jan 24 02:41:24.434685 containerd[1514]: 2026-01-24 02:41:24.386 [INFO][5298] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Jan 24 02:41:24.434685 containerd[1514]: 2026-01-24 02:41:24.386 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Jan 24 02:41:24.434685 containerd[1514]: 2026-01-24 02:41:24.417 [INFO][5305] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" HandleID="k8s-pod-network.77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:24.434685 containerd[1514]: 2026-01-24 02:41:24.417 [INFO][5305] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:24.434685 containerd[1514]: 2026-01-24 02:41:24.417 [INFO][5305] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:24.434685 containerd[1514]: 2026-01-24 02:41:24.427 [WARNING][5305] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" HandleID="k8s-pod-network.77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:24.434685 containerd[1514]: 2026-01-24 02:41:24.427 [INFO][5305] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" HandleID="k8s-pod-network.77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--qrxcz-eth0" Jan 24 02:41:24.434685 containerd[1514]: 2026-01-24 02:41:24.429 [INFO][5305] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:24.434685 containerd[1514]: 2026-01-24 02:41:24.432 [INFO][5298] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363" Jan 24 02:41:24.438285 containerd[1514]: time="2026-01-24T02:41:24.437295526Z" level=info msg="TearDown network for sandbox \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\" successfully" Jan 24 02:41:24.472658 containerd[1514]: time="2026-01-24T02:41:24.472585986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 24 02:41:24.472775 containerd[1514]: time="2026-01-24T02:41:24.472681137Z" level=info msg="RemovePodSandbox \"77912bc18d5122154e470131d0eee5f8ffd5c0bf277aaeb1b9b0568cc58bb363\" returns successfully" Jan 24 02:41:24.473551 containerd[1514]: time="2026-01-24T02:41:24.473507758Z" level=info msg="StopPodSandbox for \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\"" Jan 24 02:41:24.575559 containerd[1514]: 2026-01-24 02:41:24.523 [WARNING][5320] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"21cb8328-f771-426d-aa02-0582dac338e9", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953", Pod:"coredns-674b8bbfcf-b4jj7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36a0a5da7d2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:24.575559 containerd[1514]: 2026-01-24 02:41:24.524 [INFO][5320] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Jan 24 02:41:24.575559 containerd[1514]: 2026-01-24 02:41:24.524 [INFO][5320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" iface="eth0" netns="" Jan 24 02:41:24.575559 containerd[1514]: 2026-01-24 02:41:24.524 [INFO][5320] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Jan 24 02:41:24.575559 containerd[1514]: 2026-01-24 02:41:24.524 [INFO][5320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Jan 24 02:41:24.575559 containerd[1514]: 2026-01-24 02:41:24.553 [INFO][5327] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" HandleID="k8s-pod-network.82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:24.575559 containerd[1514]: 2026-01-24 02:41:24.553 [INFO][5327] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 02:41:24.575559 containerd[1514]: 2026-01-24 02:41:24.553 [INFO][5327] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:24.575559 containerd[1514]: 2026-01-24 02:41:24.563 [WARNING][5327] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" HandleID="k8s-pod-network.82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:24.575559 containerd[1514]: 2026-01-24 02:41:24.563 [INFO][5327] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" HandleID="k8s-pod-network.82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:24.575559 containerd[1514]: 2026-01-24 02:41:24.565 [INFO][5327] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:24.575559 containerd[1514]: 2026-01-24 02:41:24.572 [INFO][5320] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Jan 24 02:41:24.577346 containerd[1514]: time="2026-01-24T02:41:24.577093626Z" level=info msg="TearDown network for sandbox \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\" successfully" Jan 24 02:41:24.577346 containerd[1514]: time="2026-01-24T02:41:24.577148081Z" level=info msg="StopPodSandbox for \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\" returns successfully" Jan 24 02:41:24.578624 containerd[1514]: time="2026-01-24T02:41:24.578579125Z" level=info msg="RemovePodSandbox for \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\"" Jan 24 02:41:24.578706 containerd[1514]: time="2026-01-24T02:41:24.578626874Z" level=info msg="Forcibly stopping sandbox \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\"" Jan 24 02:41:24.729703 containerd[1514]: 2026-01-24 02:41:24.652 [WARNING][5341] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"21cb8328-f771-426d-aa02-0582dac338e9", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"d7f70ca1f7e03cbfead08932fc3d63b3d07e24df0cd1d16f326667791f2c9953", Pod:"coredns-674b8bbfcf-b4jj7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36a0a5da7d2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:24.729703 containerd[1514]: 
2026-01-24 02:41:24.653 [INFO][5341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Jan 24 02:41:24.729703 containerd[1514]: 2026-01-24 02:41:24.653 [INFO][5341] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" iface="eth0" netns="" Jan 24 02:41:24.729703 containerd[1514]: 2026-01-24 02:41:24.653 [INFO][5341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Jan 24 02:41:24.729703 containerd[1514]: 2026-01-24 02:41:24.653 [INFO][5341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Jan 24 02:41:24.729703 containerd[1514]: 2026-01-24 02:41:24.707 [INFO][5348] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" HandleID="k8s-pod-network.82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:24.729703 containerd[1514]: 2026-01-24 02:41:24.707 [INFO][5348] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:24.729703 containerd[1514]: 2026-01-24 02:41:24.707 [INFO][5348] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:24.729703 containerd[1514]: 2026-01-24 02:41:24.719 [WARNING][5348] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" HandleID="k8s-pod-network.82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:24.729703 containerd[1514]: 2026-01-24 02:41:24.719 [INFO][5348] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" HandleID="k8s-pod-network.82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Workload="srv--aqhf7.gb1.brightbox.com-k8s-coredns--674b8bbfcf--b4jj7-eth0" Jan 24 02:41:24.729703 containerd[1514]: 2026-01-24 02:41:24.722 [INFO][5348] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:24.729703 containerd[1514]: 2026-01-24 02:41:24.724 [INFO][5341] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257" Jan 24 02:41:24.729703 containerd[1514]: time="2026-01-24T02:41:24.728596231Z" level=info msg="TearDown network for sandbox \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\" successfully" Jan 24 02:41:24.745613 containerd[1514]: time="2026-01-24T02:41:24.745555275Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 24 02:41:24.745715 containerd[1514]: time="2026-01-24T02:41:24.745669780Z" level=info msg="RemovePodSandbox \"82a362d8f03c2273893c6edb4988b9d1e301c96cd9f9f030c399d0c927633257\" returns successfully" Jan 24 02:41:24.747243 containerd[1514]: time="2026-01-24T02:41:24.746728084Z" level=info msg="StopPodSandbox for \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\"" Jan 24 02:41:24.845083 containerd[1514]: 2026-01-24 02:41:24.796 [WARNING][5363] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"310e75f7-dcbf-42b8-8e1b-0553e380b8f3", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece", Pod:"goldmane-666569f655-rksn8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali724997fc00b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:24.845083 containerd[1514]: 2026-01-24 02:41:24.797 [INFO][5363] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Jan 24 02:41:24.845083 containerd[1514]: 2026-01-24 02:41:24.797 [INFO][5363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" iface="eth0" netns="" Jan 24 02:41:24.845083 containerd[1514]: 2026-01-24 02:41:24.797 [INFO][5363] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Jan 24 02:41:24.845083 containerd[1514]: 2026-01-24 02:41:24.797 [INFO][5363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Jan 24 02:41:24.845083 containerd[1514]: 2026-01-24 02:41:24.827 [INFO][5370] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" HandleID="k8s-pod-network.b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Workload="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:24.845083 containerd[1514]: 2026-01-24 02:41:24.827 [INFO][5370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:24.845083 containerd[1514]: 2026-01-24 02:41:24.827 [INFO][5370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:24.845083 containerd[1514]: 2026-01-24 02:41:24.837 [WARNING][5370] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" HandleID="k8s-pod-network.b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Workload="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:24.845083 containerd[1514]: 2026-01-24 02:41:24.838 [INFO][5370] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" HandleID="k8s-pod-network.b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Workload="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:24.845083 containerd[1514]: 2026-01-24 02:41:24.840 [INFO][5370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:24.845083 containerd[1514]: 2026-01-24 02:41:24.842 [INFO][5363] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Jan 24 02:41:24.847003 containerd[1514]: time="2026-01-24T02:41:24.845483600Z" level=info msg="TearDown network for sandbox \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\" successfully" Jan 24 02:41:24.847003 containerd[1514]: time="2026-01-24T02:41:24.845539058Z" level=info msg="StopPodSandbox for \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\" returns successfully" Jan 24 02:41:24.847003 containerd[1514]: time="2026-01-24T02:41:24.846232680Z" level=info msg="RemovePodSandbox for \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\"" Jan 24 02:41:24.847003 containerd[1514]: time="2026-01-24T02:41:24.846282310Z" level=info msg="Forcibly stopping sandbox \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\"" Jan 24 02:41:24.950772 containerd[1514]: 2026-01-24 02:41:24.896 [WARNING][5384] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"310e75f7-dcbf-42b8-8e1b-0553e380b8f3", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"8471e41cab6031ed160c2430241329d3a17c123b456b378266e0c93dbecd8ece", Pod:"goldmane-666569f655-rksn8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali724997fc00b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:24.950772 containerd[1514]: 2026-01-24 02:41:24.898 [INFO][5384] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Jan 24 02:41:24.950772 containerd[1514]: 2026-01-24 02:41:24.898 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" iface="eth0" netns="" Jan 24 02:41:24.950772 containerd[1514]: 2026-01-24 02:41:24.898 [INFO][5384] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Jan 24 02:41:24.950772 containerd[1514]: 2026-01-24 02:41:24.898 [INFO][5384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Jan 24 02:41:24.950772 containerd[1514]: 2026-01-24 02:41:24.933 [INFO][5391] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" HandleID="k8s-pod-network.b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Workload="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:24.950772 containerd[1514]: 2026-01-24 02:41:24.934 [INFO][5391] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:24.950772 containerd[1514]: 2026-01-24 02:41:24.934 [INFO][5391] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:24.950772 containerd[1514]: 2026-01-24 02:41:24.944 [WARNING][5391] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" HandleID="k8s-pod-network.b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Workload="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:24.950772 containerd[1514]: 2026-01-24 02:41:24.944 [INFO][5391] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" HandleID="k8s-pod-network.b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Workload="srv--aqhf7.gb1.brightbox.com-k8s-goldmane--666569f655--rksn8-eth0" Jan 24 02:41:24.950772 containerd[1514]: 2026-01-24 02:41:24.946 [INFO][5391] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:24.950772 containerd[1514]: 2026-01-24 02:41:24.948 [INFO][5384] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3" Jan 24 02:41:24.952301 containerd[1514]: time="2026-01-24T02:41:24.950796717Z" level=info msg="TearDown network for sandbox \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\" successfully" Jan 24 02:41:24.955128 containerd[1514]: time="2026-01-24T02:41:24.955092877Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 24 02:41:24.955241 containerd[1514]: time="2026-01-24T02:41:24.955195112Z" level=info msg="RemovePodSandbox \"b85375452377239b4ec6428867cba209b48eb517560405eeb32f8145196423e3\" returns successfully" Jan 24 02:41:24.956256 containerd[1514]: time="2026-01-24T02:41:24.955879761Z" level=info msg="StopPodSandbox for \"6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d\"" Jan 24 02:41:25.056652 containerd[1514]: 2026-01-24 02:41:25.011 [WARNING][5405] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.056652 containerd[1514]: 2026-01-24 02:41:25.012 [INFO][5405] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Jan 24 02:41:25.056652 containerd[1514]: 2026-01-24 02:41:25.012 [INFO][5405] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" iface="eth0" netns="" Jan 24 02:41:25.056652 containerd[1514]: 2026-01-24 02:41:25.012 [INFO][5405] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Jan 24 02:41:25.056652 containerd[1514]: 2026-01-24 02:41:25.012 [INFO][5405] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Jan 24 02:41:25.056652 containerd[1514]: 2026-01-24 02:41:25.039 [INFO][5413] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" HandleID="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.056652 containerd[1514]: 2026-01-24 02:41:25.039 [INFO][5413] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:25.056652 containerd[1514]: 2026-01-24 02:41:25.040 [INFO][5413] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:25.056652 containerd[1514]: 2026-01-24 02:41:25.050 [WARNING][5413] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" HandleID="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.056652 containerd[1514]: 2026-01-24 02:41:25.050 [INFO][5413] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" HandleID="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.056652 containerd[1514]: 2026-01-24 02:41:25.052 [INFO][5413] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:25.056652 containerd[1514]: 2026-01-24 02:41:25.054 [INFO][5405] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Jan 24 02:41:25.057839 containerd[1514]: time="2026-01-24T02:41:25.057429368Z" level=info msg="TearDown network for sandbox \"6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d\" successfully" Jan 24 02:41:25.057839 containerd[1514]: time="2026-01-24T02:41:25.057491157Z" level=info msg="StopPodSandbox for \"6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d\" returns successfully" Jan 24 02:41:25.058782 containerd[1514]: time="2026-01-24T02:41:25.058377271Z" level=info msg="RemovePodSandbox for \"6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d\"" Jan 24 02:41:25.058782 containerd[1514]: time="2026-01-24T02:41:25.058416075Z" level=info msg="Forcibly stopping sandbox \"6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d\"" Jan 24 02:41:25.172662 containerd[1514]: 2026-01-24 02:41:25.113 [WARNING][5428] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.172662 containerd[1514]: 2026-01-24 02:41:25.113 [INFO][5428] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Jan 24 02:41:25.172662 containerd[1514]: 2026-01-24 02:41:25.113 [INFO][5428] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" iface="eth0" netns="" Jan 24 02:41:25.172662 containerd[1514]: 2026-01-24 02:41:25.114 [INFO][5428] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Jan 24 02:41:25.172662 containerd[1514]: 2026-01-24 02:41:25.114 [INFO][5428] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Jan 24 02:41:25.172662 containerd[1514]: 2026-01-24 02:41:25.146 [INFO][5435] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" HandleID="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.172662 containerd[1514]: 2026-01-24 02:41:25.147 [INFO][5435] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:25.172662 containerd[1514]: 2026-01-24 02:41:25.147 [INFO][5435] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:25.172662 containerd[1514]: 2026-01-24 02:41:25.163 [WARNING][5435] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" HandleID="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.172662 containerd[1514]: 2026-01-24 02:41:25.163 [INFO][5435] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" HandleID="k8s-pod-network.6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.172662 containerd[1514]: 2026-01-24 02:41:25.166 [INFO][5435] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:25.172662 containerd[1514]: 2026-01-24 02:41:25.170 [INFO][5428] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d" Jan 24 02:41:25.174845 containerd[1514]: time="2026-01-24T02:41:25.174282018Z" level=info msg="TearDown network for sandbox \"6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d\" successfully" Jan 24 02:41:25.184708 containerd[1514]: time="2026-01-24T02:41:25.184463431Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 24 02:41:25.184708 containerd[1514]: time="2026-01-24T02:41:25.184542828Z" level=info msg="RemovePodSandbox \"6bff66331087aa45544f5eb6521ebe7b3da063a3231a9f14d2b108039ca9c73d\" returns successfully" Jan 24 02:41:25.185177 containerd[1514]: time="2026-01-24T02:41:25.185122405Z" level=info msg="StopPodSandbox for \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\"" Jan 24 02:41:25.289077 containerd[1514]: 2026-01-24 02:41:25.234 [WARNING][5450] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.289077 containerd[1514]: 2026-01-24 02:41:25.235 [INFO][5450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:25.289077 containerd[1514]: 2026-01-24 02:41:25.235 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" iface="eth0" netns="" Jan 24 02:41:25.289077 containerd[1514]: 2026-01-24 02:41:25.235 [INFO][5450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:25.289077 containerd[1514]: 2026-01-24 02:41:25.235 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:25.289077 containerd[1514]: 2026-01-24 02:41:25.272 [INFO][5458] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" HandleID="k8s-pod-network.6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.289077 containerd[1514]: 2026-01-24 02:41:25.272 [INFO][5458] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:25.289077 containerd[1514]: 2026-01-24 02:41:25.273 [INFO][5458] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:25.289077 containerd[1514]: 2026-01-24 02:41:25.281 [WARNING][5458] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" HandleID="k8s-pod-network.6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.289077 containerd[1514]: 2026-01-24 02:41:25.282 [INFO][5458] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" HandleID="k8s-pod-network.6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.289077 containerd[1514]: 2026-01-24 02:41:25.284 [INFO][5458] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:25.289077 containerd[1514]: 2026-01-24 02:41:25.286 [INFO][5450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:25.290581 containerd[1514]: time="2026-01-24T02:41:25.289132554Z" level=info msg="TearDown network for sandbox \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\" successfully" Jan 24 02:41:25.290581 containerd[1514]: time="2026-01-24T02:41:25.289169383Z" level=info msg="StopPodSandbox for \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\" returns successfully" Jan 24 02:41:25.290581 containerd[1514]: time="2026-01-24T02:41:25.289961250Z" level=info msg="RemovePodSandbox for \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\"" Jan 24 02:41:25.290581 containerd[1514]: time="2026-01-24T02:41:25.290000786Z" level=info msg="Forcibly stopping sandbox \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\"" Jan 24 02:41:25.315459 systemd[1]: Started sshd@14-10.230.33.130:22-159.223.6.232:46074.service - OpenSSH per-connection server daemon (159.223.6.232:46074). 
Jan 24 02:41:25.423039 containerd[1514]: 2026-01-24 02:41:25.363 [WARNING][5473] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" WorkloadEndpoint="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.423039 containerd[1514]: 2026-01-24 02:41:25.364 [INFO][5473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:25.423039 containerd[1514]: 2026-01-24 02:41:25.364 [INFO][5473] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" iface="eth0" netns="" Jan 24 02:41:25.423039 containerd[1514]: 2026-01-24 02:41:25.364 [INFO][5473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:25.423039 containerd[1514]: 2026-01-24 02:41:25.364 [INFO][5473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:25.423039 containerd[1514]: 2026-01-24 02:41:25.397 [INFO][5481] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" HandleID="k8s-pod-network.6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.423039 containerd[1514]: 2026-01-24 02:41:25.397 [INFO][5481] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:25.423039 containerd[1514]: 2026-01-24 02:41:25.397 [INFO][5481] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 02:41:25.423039 containerd[1514]: 2026-01-24 02:41:25.414 [WARNING][5481] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" HandleID="k8s-pod-network.6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.423039 containerd[1514]: 2026-01-24 02:41:25.414 [INFO][5481] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" HandleID="k8s-pod-network.6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Workload="srv--aqhf7.gb1.brightbox.com-k8s-whisker--7f8b76b7d6--25fqp-eth0" Jan 24 02:41:25.423039 containerd[1514]: 2026-01-24 02:41:25.418 [INFO][5481] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:25.423039 containerd[1514]: 2026-01-24 02:41:25.421 [INFO][5473] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e" Jan 24 02:41:25.424214 containerd[1514]: time="2026-01-24T02:41:25.423296620Z" level=info msg="TearDown network for sandbox \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\" successfully" Jan 24 02:41:25.428995 containerd[1514]: time="2026-01-24T02:41:25.428913243Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 24 02:41:25.429696 containerd[1514]: time="2026-01-24T02:41:25.429285529Z" level=info msg="RemovePodSandbox \"6940234d81b128a3c919aa4d3f55dcf38acf1ad595328a6afc65ef04e62ac99e\" returns successfully" Jan 24 02:41:25.431126 containerd[1514]: time="2026-01-24T02:41:25.431082885Z" level=info msg="StopPodSandbox for \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\"" Jan 24 02:41:25.457922 sshd[5477]: Invalid user mysql from 159.223.6.232 port 46074 Jan 24 02:41:25.484988 sshd[5477]: Connection closed by invalid user mysql 159.223.6.232 port 46074 [preauth] Jan 24 02:41:25.489301 systemd[1]: sshd@14-10.230.33.130:22-159.223.6.232:46074.service: Deactivated successfully. Jan 24 02:41:25.548737 containerd[1514]: 2026-01-24 02:41:25.495 [WARNING][5497] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0", GenerateName:"calico-apiserver-7bcbb787c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"57ed0d28-f7e6-4e62-8d12-5c54e0de4159", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcbb787c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6", Pod:"calico-apiserver-7bcbb787c9-46gqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e247cf4083", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:25.548737 containerd[1514]: 2026-01-24 02:41:25.496 [INFO][5497] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Jan 24 02:41:25.548737 containerd[1514]: 2026-01-24 02:41:25.496 [INFO][5497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" iface="eth0" netns="" Jan 24 02:41:25.548737 containerd[1514]: 2026-01-24 02:41:25.496 [INFO][5497] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Jan 24 02:41:25.548737 containerd[1514]: 2026-01-24 02:41:25.496 [INFO][5497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Jan 24 02:41:25.548737 containerd[1514]: 2026-01-24 02:41:25.529 [INFO][5506] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" HandleID="k8s-pod-network.89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:25.548737 containerd[1514]: 2026-01-24 02:41:25.529 [INFO][5506] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 02:41:25.548737 containerd[1514]: 2026-01-24 02:41:25.529 [INFO][5506] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:25.548737 containerd[1514]: 2026-01-24 02:41:25.540 [WARNING][5506] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" HandleID="k8s-pod-network.89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:25.548737 containerd[1514]: 2026-01-24 02:41:25.540 [INFO][5506] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" HandleID="k8s-pod-network.89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:25.548737 containerd[1514]: 2026-01-24 02:41:25.543 [INFO][5506] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:25.548737 containerd[1514]: 2026-01-24 02:41:25.546 [INFO][5497] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Jan 24 02:41:25.549533 containerd[1514]: time="2026-01-24T02:41:25.548733053Z" level=info msg="TearDown network for sandbox \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\" successfully" Jan 24 02:41:25.549533 containerd[1514]: time="2026-01-24T02:41:25.548781111Z" level=info msg="StopPodSandbox for \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\" returns successfully" Jan 24 02:41:25.553724 containerd[1514]: time="2026-01-24T02:41:25.550875369Z" level=info msg="RemovePodSandbox for \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\"" Jan 24 02:41:25.553724 containerd[1514]: time="2026-01-24T02:41:25.550921013Z" level=info msg="Forcibly stopping sandbox \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\"" Jan 24 02:41:25.677894 containerd[1514]: 2026-01-24 02:41:25.619 [WARNING][5520] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0", GenerateName:"calico-apiserver-7bcbb787c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"57ed0d28-f7e6-4e62-8d12-5c54e0de4159", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 2, 40, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bcbb787c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aqhf7.gb1.brightbox.com", ContainerID:"e768f00e124b88d415d3c950b35038db71858aec929ff4d9d1137d992d6061f6", Pod:"calico-apiserver-7bcbb787c9-46gqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e247cf4083", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 02:41:25.677894 containerd[1514]: 2026-01-24 02:41:25.619 [INFO][5520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Jan 24 02:41:25.677894 containerd[1514]: 2026-01-24 02:41:25.619 [INFO][5520] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" iface="eth0" netns="" Jan 24 02:41:25.677894 containerd[1514]: 2026-01-24 02:41:25.619 [INFO][5520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Jan 24 02:41:25.677894 containerd[1514]: 2026-01-24 02:41:25.619 [INFO][5520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Jan 24 02:41:25.677894 containerd[1514]: 2026-01-24 02:41:25.659 [INFO][5527] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" HandleID="k8s-pod-network.89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:25.677894 containerd[1514]: 2026-01-24 02:41:25.660 [INFO][5527] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 02:41:25.677894 containerd[1514]: 2026-01-24 02:41:25.660 [INFO][5527] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 02:41:25.677894 containerd[1514]: 2026-01-24 02:41:25.670 [WARNING][5527] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" HandleID="k8s-pod-network.89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:25.677894 containerd[1514]: 2026-01-24 02:41:25.670 [INFO][5527] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" HandleID="k8s-pod-network.89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Workload="srv--aqhf7.gb1.brightbox.com-k8s-calico--apiserver--7bcbb787c9--46gqx-eth0" Jan 24 02:41:25.677894 containerd[1514]: 2026-01-24 02:41:25.673 [INFO][5527] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 02:41:25.677894 containerd[1514]: 2026-01-24 02:41:25.675 [INFO][5520] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5" Jan 24 02:41:25.677894 containerd[1514]: time="2026-01-24T02:41:25.677601390Z" level=info msg="TearDown network for sandbox \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\" successfully" Jan 24 02:41:25.685663 containerd[1514]: time="2026-01-24T02:41:25.685602833Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 24 02:41:25.685770 containerd[1514]: time="2026-01-24T02:41:25.685690189Z" level=info msg="RemovePodSandbox \"89ad36273ed6afb708a3c02a078b937148f832d03cec05fd4d9978db9c3cceb5\" returns successfully" Jan 24 02:41:28.560207 containerd[1514]: time="2026-01-24T02:41:28.560126407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 02:41:28.883795 containerd[1514]: time="2026-01-24T02:41:28.882973592Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:28.885807 containerd[1514]: time="2026-01-24T02:41:28.885352772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 02:41:28.885807 containerd[1514]: time="2026-01-24T02:41:28.885437943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 02:41:28.886500 kubelet[2692]: E0124 02:41:28.886274 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 02:41:28.887232 kubelet[2692]: E0124 02:41:28.886547 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 02:41:28.887232 kubelet[2692]: E0124 02:41:28.886907 2692 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwhhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bcbb787c9-s24s2_calico-apiserver(9e38edac-3735-49c2-8f05-a82f9686ac99): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:28.889341 kubelet[2692]: E0124 02:41:28.888965 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2" podUID="9e38edac-3735-49c2-8f05-a82f9686ac99" Jan 24 02:41:32.561074 containerd[1514]: time="2026-01-24T02:41:32.560947224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 02:41:32.873158 containerd[1514]: 
time="2026-01-24T02:41:32.872524816Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:32.874034 containerd[1514]: time="2026-01-24T02:41:32.873983679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 02:41:32.874450 containerd[1514]: time="2026-01-24T02:41:32.874089464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 02:41:32.874582 kubelet[2692]: E0124 02:41:32.874505 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 02:41:32.875146 kubelet[2692]: E0124 02:41:32.874641 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 02:41:32.875146 kubelet[2692]: E0124 02:41:32.874962 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtjhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bcbb787c9-46gqx_calico-apiserver(57ed0d28-f7e6-4e62-8d12-5c54e0de4159): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:32.877023 kubelet[2692]: E0124 02:41:32.876941 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-46gqx" podUID="57ed0d28-f7e6-4e62-8d12-5c54e0de4159" Jan 24 02:41:33.560821 containerd[1514]: time="2026-01-24T02:41:33.560492499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 02:41:33.882356 containerd[1514]: time="2026-01-24T02:41:33.882044839Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:33.886190 containerd[1514]: time="2026-01-24T02:41:33.886123858Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 02:41:33.886287 containerd[1514]: time="2026-01-24T02:41:33.886230782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 02:41:33.886634 kubelet[2692]: E0124 02:41:33.886558 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 02:41:33.887070 kubelet[2692]: E0124 02:41:33.886645 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 02:41:33.887070 kubelet[2692]: E0124 02:41:33.886885 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kpww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rksn8_calico-system(310e75f7-dcbf-42b8-8e1b-0553e380b8f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:33.888565 kubelet[2692]: E0124 02:41:33.888506 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rksn8" podUID="310e75f7-dcbf-42b8-8e1b-0553e380b8f3" Jan 24 02:41:34.560412 containerd[1514]: time="2026-01-24T02:41:34.560024646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 02:41:34.872135 containerd[1514]: time="2026-01-24T02:41:34.871543248Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:34.873399 containerd[1514]: time="2026-01-24T02:41:34.873182594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 02:41:34.873399 containerd[1514]: time="2026-01-24T02:41:34.873292150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 02:41:34.873930 kubelet[2692]: E0124 02:41:34.873714 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 02:41:34.873930 kubelet[2692]: E0124 02:41:34.873822 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 02:41:34.875387 kubelet[2692]: E0124 02:41:34.874632 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2c29136013d84d03b2adb5ac5c06e984,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-956rv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-db9cdc476-5b4vs_calico-system(4eea0edb-bf06-4866-bb21-e6ce9438b127): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:34.876144 containerd[1514]: time="2026-01-24T02:41:34.874893748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 02:41:35.198926 
containerd[1514]: time="2026-01-24T02:41:35.198666357Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:35.200531 containerd[1514]: time="2026-01-24T02:41:35.200412811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 02:41:35.200531 containerd[1514]: time="2026-01-24T02:41:35.200453505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 02:41:35.200782 kubelet[2692]: E0124 02:41:35.200718 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 02:41:35.201650 kubelet[2692]: E0124 02:41:35.200817 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 02:41:35.204678 containerd[1514]: time="2026-01-24T02:41:35.201957082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 02:41:35.204795 kubelet[2692]: E0124 02:41:35.201219 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7pg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rrnz_calico-system(9f63ab66-558d-4f53-8717-746e17757652): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:35.524221 containerd[1514]: time="2026-01-24T02:41:35.524043482Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:35.526102 containerd[1514]: time="2026-01-24T02:41:35.526047220Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 02:41:35.526375 containerd[1514]: time="2026-01-24T02:41:35.526064628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 02:41:35.526479 kubelet[2692]: E0124 02:41:35.526399 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 02:41:35.526585 kubelet[2692]: E0124 02:41:35.526525 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 02:41:35.527427 kubelet[2692]: E0124 02:41:35.527102 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dbhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d5c647d49-zhs4g_calico-system(9038de97-8842-48ce-8fdf-e9b5cfec0012): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:35.529022 containerd[1514]: time="2026-01-24T02:41:35.527162275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 02:41:35.529081 kubelet[2692]: E0124 02:41:35.528782 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c647d49-zhs4g" podUID="9038de97-8842-48ce-8fdf-e9b5cfec0012" Jan 24 02:41:35.844138 
containerd[1514]: time="2026-01-24T02:41:35.844068020Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:35.845234 containerd[1514]: time="2026-01-24T02:41:35.845100534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 02:41:35.845234 containerd[1514]: time="2026-01-24T02:41:35.845131100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 02:41:35.845460 kubelet[2692]: E0124 02:41:35.845372 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 02:41:35.845543 kubelet[2692]: E0124 02:41:35.845459 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 02:41:35.846162 containerd[1514]: time="2026-01-24T02:41:35.845886399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 02:41:35.846453 kubelet[2692]: E0124 02:41:35.846390 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-956rv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-db9cdc476-5b4vs_calico-system(4eea0edb-bf06-4866-bb21-e6ce9438b127): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:35.848236 kubelet[2692]: E0124 02:41:35.847757 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db9cdc476-5b4vs" podUID="4eea0edb-bf06-4866-bb21-e6ce9438b127" Jan 24 02:41:36.164519 containerd[1514]: time="2026-01-24T02:41:36.164360872Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:36.165806 containerd[1514]: time="2026-01-24T02:41:36.165756218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 02:41:36.165981 containerd[1514]: time="2026-01-24T02:41:36.165768821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 02:41:36.166285 kubelet[2692]: E0124 02:41:36.166230 2692 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 02:41:36.166512 kubelet[2692]: E0124 02:41:36.166296 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 02:41:36.166589 kubelet[2692]: E0124 02:41:36.166508 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7pg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rrnz_calico-system(9f63ab66-558d-4f53-8717-746e17757652): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:36.167815 kubelet[2692]: E0124 02:41:36.167772 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652" Jan 24 02:41:40.816813 systemd[1]: Started sshd@15-10.230.33.130:22-20.161.92.111:48524.service - OpenSSH per-connection server daemon (20.161.92.111:48524). Jan 24 02:41:41.389651 sshd[5556]: Accepted publickey for core from 20.161.92.111 port 48524 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:41:41.393691 sshd[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:41:41.410805 systemd-logind[1491]: New session 10 of user core. Jan 24 02:41:41.416559 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 24 02:41:41.563090 kubelet[2692]: E0124 02:41:41.563032 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2" podUID="9e38edac-3735-49c2-8f05-a82f9686ac99" Jan 24 02:41:42.416229 sshd[5556]: pam_unix(sshd:session): session closed for user core Jan 24 02:41:42.430653 systemd[1]: sshd@15-10.230.33.130:22-20.161.92.111:48524.service: Deactivated successfully. Jan 24 02:41:42.434284 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 02:41:42.436671 systemd-logind[1491]: Session 10 logged out. Waiting for processes to exit. Jan 24 02:41:42.438681 systemd-logind[1491]: Removed session 10. Jan 24 02:41:45.559698 kubelet[2692]: E0124 02:41:45.559121 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-46gqx" podUID="57ed0d28-f7e6-4e62-8d12-5c54e0de4159" Jan 24 02:41:47.519715 systemd[1]: Started sshd@16-10.230.33.130:22-20.161.92.111:41270.service - OpenSSH per-connection server daemon (20.161.92.111:41270). 
Jan 24 02:41:47.565959 kubelet[2692]: E0124 02:41:47.565279 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c647d49-zhs4g" podUID="9038de97-8842-48ce-8fdf-e9b5cfec0012" Jan 24 02:41:47.565959 kubelet[2692]: E0124 02:41:47.565800 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rksn8" podUID="310e75f7-dcbf-42b8-8e1b-0553e380b8f3" Jan 24 02:41:48.133078 sshd[5601]: Accepted publickey for core from 20.161.92.111 port 41270 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:41:48.135602 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:41:48.146114 systemd-logind[1491]: New session 11 of user core. Jan 24 02:41:48.150545 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 24 02:41:48.562284 kubelet[2692]: E0124 02:41:48.562205 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652" Jan 24 02:41:48.729222 sshd[5601]: pam_unix(sshd:session): session closed for user core Jan 24 02:41:48.734096 systemd[1]: sshd@16-10.230.33.130:22-20.161.92.111:41270.service: Deactivated successfully. Jan 24 02:41:48.738216 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 02:41:48.740840 systemd-logind[1491]: Session 11 logged out. Waiting for processes to exit. Jan 24 02:41:48.742637 systemd-logind[1491]: Removed session 11. Jan 24 02:41:49.318635 systemd[1]: Started sshd@17-10.230.33.130:22-157.245.70.174:41738.service - OpenSSH per-connection server daemon (157.245.70.174:41738). Jan 24 02:41:49.450108 sshd[5617]: Connection closed by authenticating user root 157.245.70.174 port 41738 [preauth] Jan 24 02:41:49.453096 systemd[1]: sshd@17-10.230.33.130:22-157.245.70.174:41738.service: Deactivated successfully. 
Jan 24 02:41:49.563282 kubelet[2692]: E0124 02:41:49.563210 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db9cdc476-5b4vs" podUID="4eea0edb-bf06-4866-bb21-e6ce9438b127" Jan 24 02:41:53.839684 systemd[1]: Started sshd@18-10.230.33.130:22-20.161.92.111:41790.service - OpenSSH per-connection server daemon (20.161.92.111:41790). Jan 24 02:41:54.514537 sshd[5624]: Accepted publickey for core from 20.161.92.111 port 41790 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:41:54.517251 sshd[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:41:54.526383 systemd-logind[1491]: New session 12 of user core. Jan 24 02:41:54.531502 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 02:41:55.140986 sshd[5624]: pam_unix(sshd:session): session closed for user core Jan 24 02:41:55.147783 systemd[1]: sshd@18-10.230.33.130:22-20.161.92.111:41790.service: Deactivated successfully. Jan 24 02:41:55.150919 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 02:41:55.153170 systemd-logind[1491]: Session 12 logged out. 
Waiting for processes to exit. Jan 24 02:41:55.156780 systemd-logind[1491]: Removed session 12. Jan 24 02:41:55.252672 systemd[1]: Started sshd@19-10.230.33.130:22-20.161.92.111:41798.service - OpenSSH per-connection server daemon (20.161.92.111:41798). Jan 24 02:41:55.562582 containerd[1514]: time="2026-01-24T02:41:55.562478011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 02:41:55.824853 sshd[5637]: Accepted publickey for core from 20.161.92.111 port 41798 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:41:55.826737 sshd[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:41:55.834628 systemd-logind[1491]: New session 13 of user core. Jan 24 02:41:55.843538 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 02:41:55.916057 containerd[1514]: time="2026-01-24T02:41:55.916000165Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:55.917435 containerd[1514]: time="2026-01-24T02:41:55.917355728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 02:41:55.917525 containerd[1514]: time="2026-01-24T02:41:55.917471810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 02:41:55.917920 kubelet[2692]: E0124 02:41:55.917822 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 
24 02:41:55.919387 kubelet[2692]: E0124 02:41:55.917962 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 02:41:55.919387 kubelet[2692]: E0124 02:41:55.918447 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwhhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bcbb787c9-s24s2_calico-apiserver(9e38edac-3735-49c2-8f05-a82f9686ac99): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:55.919892 kubelet[2692]: E0124 02:41:55.919848 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2" podUID="9e38edac-3735-49c2-8f05-a82f9686ac99" Jan 24 02:41:56.406740 sshd[5637]: pam_unix(sshd:session): session closed for user core Jan 24 02:41:56.412072 systemd[1]: sshd@19-10.230.33.130:22-20.161.92.111:41798.service: Deactivated successfully. 
Jan 24 02:41:56.412839 systemd-logind[1491]: Session 13 logged out. Waiting for processes to exit. Jan 24 02:41:56.416072 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 02:41:56.419038 systemd-logind[1491]: Removed session 13. Jan 24 02:41:56.517638 systemd[1]: Started sshd@20-10.230.33.130:22-20.161.92.111:41806.service - OpenSSH per-connection server daemon (20.161.92.111:41806). Jan 24 02:41:57.140221 sshd[5648]: Accepted publickey for core from 20.161.92.111 port 41806 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:41:57.143025 sshd[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:41:57.157838 systemd-logind[1491]: New session 14 of user core. Jan 24 02:41:57.165769 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 02:41:57.673196 sshd[5648]: pam_unix(sshd:session): session closed for user core Jan 24 02:41:57.679471 systemd[1]: sshd@20-10.230.33.130:22-20.161.92.111:41806.service: Deactivated successfully. Jan 24 02:41:57.683569 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 02:41:57.685550 systemd-logind[1491]: Session 14 logged out. Waiting for processes to exit. Jan 24 02:41:57.687372 systemd-logind[1491]: Removed session 14. 
Jan 24 02:41:59.561161 containerd[1514]: time="2026-01-24T02:41:59.560293226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 02:41:59.875390 containerd[1514]: time="2026-01-24T02:41:59.875063917Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:41:59.883677 containerd[1514]: time="2026-01-24T02:41:59.883570486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 02:41:59.883806 containerd[1514]: time="2026-01-24T02:41:59.883712022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 02:41:59.883976 kubelet[2692]: E0124 02:41:59.883912 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 02:41:59.885492 kubelet[2692]: E0124 02:41:59.883985 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 02:41:59.885492 kubelet[2692]: E0124 02:41:59.884303 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtjhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7bcbb787c9-46gqx_calico-apiserver(57ed0d28-f7e6-4e62-8d12-5c54e0de4159): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 02:41:59.886370 kubelet[2692]: E0124 02:41:59.886040 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-46gqx" podUID="57ed0d28-f7e6-4e62-8d12-5c54e0de4159" Jan 24 02:41:59.886472 containerd[1514]: time="2026-01-24T02:41:59.886065070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 02:42:00.194932 containerd[1514]: time="2026-01-24T02:42:00.194704396Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:42:00.200873 containerd[1514]: time="2026-01-24T02:42:00.200809338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 02:42:00.201013 containerd[1514]: time="2026-01-24T02:42:00.200934320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 02:42:00.201402 kubelet[2692]: E0124 02:42:00.201307 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 02:42:00.201402 kubelet[2692]: E0124 02:42:00.201401 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 02:42:00.201993 kubelet[2692]: E0124 02:42:00.201748 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveRe
adOnly:nil,},VolumeMount{Name:kube-api-access-9kpww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rksn8_calico-system(310e75f7-dcbf-42b8-8e1b-0553e380b8f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 02:42:00.203738 kubelet[2692]: E0124 02:42:00.203666 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rksn8" podUID="310e75f7-dcbf-42b8-8e1b-0553e380b8f3" Jan 24 02:42:01.562361 containerd[1514]: time="2026-01-24T02:42:01.561239548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 02:42:01.883507 containerd[1514]: time="2026-01-24T02:42:01.883283179Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:42:01.884916 containerd[1514]: time="2026-01-24T02:42:01.884746753Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 02:42:01.884916 containerd[1514]: time="2026-01-24T02:42:01.884849505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 02:42:01.885175 kubelet[2692]: E0124 02:42:01.885115 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 02:42:01.886969 kubelet[2692]: E0124 02:42:01.885189 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 02:42:01.886969 kubelet[2692]: E0124 02:42:01.885662 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dbhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d5c647d49-zhs4g_calico-system(9038de97-8842-48ce-8fdf-e9b5cfec0012): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 02:42:01.887963 containerd[1514]: time="2026-01-24T02:42:01.885632623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 02:42:01.889045 kubelet[2692]: E0124 02:42:01.888144 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c647d49-zhs4g" podUID="9038de97-8842-48ce-8fdf-e9b5cfec0012" Jan 24 02:42:02.192017 containerd[1514]: 
time="2026-01-24T02:42:02.191804565Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:42:02.194139 containerd[1514]: time="2026-01-24T02:42:02.193846071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 02:42:02.194139 containerd[1514]: time="2026-01-24T02:42:02.193925458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 02:42:02.194307 kubelet[2692]: E0124 02:42:02.194243 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 02:42:02.194417 kubelet[2692]: E0124 02:42:02.194333 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 02:42:02.195835 kubelet[2692]: E0124 02:42:02.194534 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7pg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rrnz_calico-system(9f63ab66-558d-4f53-8717-746e17757652): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 02:42:02.196989 containerd[1514]: time="2026-01-24T02:42:02.196941242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 02:42:02.512682 containerd[1514]: time="2026-01-24T02:42:02.512300626Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:42:02.513975 containerd[1514]: time="2026-01-24T02:42:02.513879803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 02:42:02.514245 containerd[1514]: time="2026-01-24T02:42:02.514120551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 02:42:02.514704 kubelet[2692]: E0124 02:42:02.514638 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 02:42:02.514804 kubelet[2692]: E0124 02:42:02.514723 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 02:42:02.514978 kubelet[2692]: 
E0124 02:42:02.514908 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7pg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-8rrnz_calico-system(9f63ab66-558d-4f53-8717-746e17757652): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 02:42:02.516368 kubelet[2692]: E0124 02:42:02.516278 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652" Jan 24 02:42:02.786784 systemd[1]: Started sshd@21-10.230.33.130:22-20.161.92.111:56030.service - OpenSSH per-connection server daemon (20.161.92.111:56030). Jan 24 02:42:03.350473 sshd[5676]: Accepted publickey for core from 20.161.92.111 port 56030 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:42:03.353220 sshd[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:42:03.361312 systemd-logind[1491]: New session 15 of user core. Jan 24 02:42:03.366675 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 24 02:42:03.561021 containerd[1514]: time="2026-01-24T02:42:03.560960235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 02:42:03.854165 sshd[5676]: pam_unix(sshd:session): session closed for user core Jan 24 02:42:03.858100 systemd[1]: sshd@21-10.230.33.130:22-20.161.92.111:56030.service: Deactivated successfully. Jan 24 02:42:03.860965 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 02:42:03.863094 systemd-logind[1491]: Session 15 logged out. Waiting for processes to exit. Jan 24 02:42:03.864753 systemd-logind[1491]: Removed session 15. Jan 24 02:42:03.879763 containerd[1514]: time="2026-01-24T02:42:03.879678603Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:42:03.880988 containerd[1514]: time="2026-01-24T02:42:03.880945500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 02:42:03.881383 containerd[1514]: time="2026-01-24T02:42:03.881043182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 02:42:03.881809 kubelet[2692]: E0124 02:42:03.881586 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 02:42:03.881809 kubelet[2692]: E0124 02:42:03.881675 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 02:42:03.882848 kubelet[2692]: E0124 02:42:03.882514 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2c29136013d84d03b2adb5ac5c06e984,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-956rv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-db9cdc476-5b4vs_calico-system(4eea0edb-bf06-4866-bb21-e6ce9438b127): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 02:42:03.885946 containerd[1514]: time="2026-01-24T02:42:03.885096764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 02:42:04.192089 containerd[1514]: time="2026-01-24T02:42:04.191883659Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:42:04.193521 containerd[1514]: time="2026-01-24T02:42:04.193464564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 02:42:04.193718 containerd[1514]: time="2026-01-24T02:42:04.193494322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 02:42:04.193872 kubelet[2692]: E0124 02:42:04.193799 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 02:42:04.193977 kubelet[2692]: E0124 02:42:04.193887 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 02:42:04.194648 kubelet[2692]: E0124 02:42:04.194094 2692 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-956rv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-db9cdc476-5b4vs_calico-system(4eea0edb-bf06-4866-bb21-e6ce9438b127): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 02:42:04.195976 kubelet[2692]: E0124 02:42:04.195936 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db9cdc476-5b4vs" podUID="4eea0edb-bf06-4866-bb21-e6ce9438b127" Jan 24 02:42:07.564875 kubelet[2692]: E0124 02:42:07.563207 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2" podUID="9e38edac-3735-49c2-8f05-a82f9686ac99" Jan 24 02:42:08.960857 systemd[1]: Started sshd@22-10.230.33.130:22-20.161.92.111:56042.service - OpenSSH per-connection server daemon (20.161.92.111:56042). 
Jan 24 02:42:09.525749 sshd[5689]: Accepted publickey for core from 20.161.92.111 port 56042 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:42:09.527996 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:42:09.535083 systemd-logind[1491]: New session 16 of user core. Jan 24 02:42:09.540672 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 02:42:10.021868 sshd[5689]: pam_unix(sshd:session): session closed for user core Jan 24 02:42:10.027529 systemd[1]: sshd@22-10.230.33.130:22-20.161.92.111:56042.service: Deactivated successfully. Jan 24 02:42:10.029999 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 02:42:10.031255 systemd-logind[1491]: Session 16 logged out. Waiting for processes to exit. Jan 24 02:42:10.033066 systemd-logind[1491]: Removed session 16. Jan 24 02:42:10.139778 systemd[1]: Started sshd@23-10.230.33.130:22-159.223.6.232:57896.service - OpenSSH per-connection server daemon (159.223.6.232:57896). Jan 24 02:42:10.234947 sshd[5702]: Invalid user mysql from 159.223.6.232 port 57896 Jan 24 02:42:10.250112 sshd[5702]: Connection closed by invalid user mysql 159.223.6.232 port 57896 [preauth] Jan 24 02:42:10.252345 systemd[1]: sshd@23-10.230.33.130:22-159.223.6.232:57896.service: Deactivated successfully. 
Jan 24 02:42:12.559625 kubelet[2692]: E0124 02:42:12.559460 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rksn8" podUID="310e75f7-dcbf-42b8-8e1b-0553e380b8f3" Jan 24 02:42:13.561892 kubelet[2692]: E0124 02:42:13.560729 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c647d49-zhs4g" podUID="9038de97-8842-48ce-8fdf-e9b5cfec0012" Jan 24 02:42:13.564683 kubelet[2692]: E0124 02:42:13.564421 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652" Jan 24 02:42:14.560787 kubelet[2692]: E0124 02:42:14.560297 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-46gqx" podUID="57ed0d28-f7e6-4e62-8d12-5c54e0de4159" Jan 24 02:42:15.141702 systemd[1]: Started sshd@24-10.230.33.130:22-20.161.92.111:52836.service - OpenSSH per-connection server daemon (20.161.92.111:52836). Jan 24 02:42:15.773868 sshd[5707]: Accepted publickey for core from 20.161.92.111 port 52836 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:42:15.782507 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:42:15.797343 systemd-logind[1491]: New session 17 of user core. Jan 24 02:42:15.803545 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 02:42:16.236480 systemd[1]: run-containerd-runc-k8s.io-146cd365c25a3b91898e97f48721481f2b2544d660e922a4981d0559937d7373-runc.TtHFtC.mount: Deactivated successfully. 
Jan 24 02:42:16.338690 update_engine[1497]: I20260124 02:42:16.336162 1497 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 24 02:42:16.338690 update_engine[1497]: I20260124 02:42:16.336283 1497 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 24 02:42:16.341911 update_engine[1497]: I20260124 02:42:16.338753 1497 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 24 02:42:16.341911 update_engine[1497]: I20260124 02:42:16.339951 1497 omaha_request_params.cc:62] Current group set to lts Jan 24 02:42:16.350549 update_engine[1497]: I20260124 02:42:16.345479 1497 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 24 02:42:16.350549 update_engine[1497]: I20260124 02:42:16.345523 1497 update_attempter.cc:643] Scheduling an action processor start. Jan 24 02:42:16.350549 update_engine[1497]: I20260124 02:42:16.345556 1497 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 24 02:42:16.350549 update_engine[1497]: I20260124 02:42:16.345641 1497 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 24 02:42:16.350549 update_engine[1497]: I20260124 02:42:16.345757 1497 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 24 02:42:16.350549 update_engine[1497]: I20260124 02:42:16.345777 1497 omaha_request_action.cc:272] Request: Jan 24 02:42:16.350549 update_engine[1497]: Jan 24 02:42:16.350549 update_engine[1497]: Jan 24 02:42:16.350549 update_engine[1497]: Jan 24 02:42:16.350549 update_engine[1497]: Jan 24 02:42:16.350549 update_engine[1497]: Jan 24 02:42:16.350549 update_engine[1497]: Jan 24 02:42:16.350549 update_engine[1497]: Jan 24 02:42:16.350549 update_engine[1497]: Jan 24 02:42:16.350549 update_engine[1497]: I20260124 02:42:16.345789 1497 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 02:42:16.365816 update_engine[1497]: I20260124 02:42:16.363590 1497 
libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 02:42:16.365816 update_engine[1497]: I20260124 02:42:16.363976 1497 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 02:42:16.384184 update_engine[1497]: E20260124 02:42:16.375456 1497 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 02:42:16.384184 update_engine[1497]: I20260124 02:42:16.375594 1497 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 24 02:42:16.409520 locksmithd[1519]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 24 02:42:16.562868 kubelet[2692]: E0124 02:42:16.562780 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db9cdc476-5b4vs" podUID="4eea0edb-bf06-4866-bb21-e6ce9438b127" Jan 24 02:42:16.637636 sshd[5707]: pam_unix(sshd:session): session closed for user core Jan 24 02:42:16.647968 systemd[1]: sshd@24-10.230.33.130:22-20.161.92.111:52836.service: Deactivated successfully. Jan 24 02:42:16.654918 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 24 02:42:16.658875 systemd-logind[1491]: Session 17 logged out. Waiting for processes to exit. Jan 24 02:42:16.660878 systemd-logind[1491]: Removed session 17. Jan 24 02:42:16.748411 systemd[1]: Started sshd@25-10.230.33.130:22-20.161.92.111:52838.service - OpenSSH per-connection server daemon (20.161.92.111:52838). Jan 24 02:42:17.351486 sshd[5744]: Accepted publickey for core from 20.161.92.111 port 52838 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:42:17.353794 sshd[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:42:17.362229 systemd-logind[1491]: New session 18 of user core. Jan 24 02:42:17.368700 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 02:42:18.330435 sshd[5744]: pam_unix(sshd:session): session closed for user core Jan 24 02:42:18.334423 systemd-logind[1491]: Session 18 logged out. Waiting for processes to exit. Jan 24 02:42:18.335433 systemd[1]: sshd@25-10.230.33.130:22-20.161.92.111:52838.service: Deactivated successfully. Jan 24 02:42:18.339313 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 02:42:18.342502 systemd-logind[1491]: Removed session 18. Jan 24 02:42:18.436200 systemd[1]: Started sshd@26-10.230.33.130:22-20.161.92.111:52844.service - OpenSSH per-connection server daemon (20.161.92.111:52844). Jan 24 02:42:19.055210 sshd[5755]: Accepted publickey for core from 20.161.92.111 port 52844 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:42:19.057455 sshd[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:42:19.064079 systemd-logind[1491]: New session 19 of user core. Jan 24 02:42:19.073551 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 02:42:20.559810 sshd[5755]: pam_unix(sshd:session): session closed for user core Jan 24 02:42:20.567516 systemd-logind[1491]: Session 19 logged out. Waiting for processes to exit. 
Jan 24 02:42:20.568848 systemd[1]: sshd@26-10.230.33.130:22-20.161.92.111:52844.service: Deactivated successfully. Jan 24 02:42:20.573911 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 02:42:20.579267 systemd-logind[1491]: Removed session 19. Jan 24 02:42:20.668597 systemd[1]: Started sshd@27-10.230.33.130:22-20.161.92.111:52858.service - OpenSSH per-connection server daemon (20.161.92.111:52858). Jan 24 02:42:21.281744 sshd[5772]: Accepted publickey for core from 20.161.92.111 port 52858 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:42:21.283861 sshd[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:42:21.291996 systemd-logind[1491]: New session 20 of user core. Jan 24 02:42:21.297574 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 02:42:21.523742 systemd[1]: Started sshd@28-10.230.33.130:22-157.245.70.174:48966.service - OpenSSH per-connection server daemon (157.245.70.174:48966). Jan 24 02:42:21.644082 sshd[5778]: Connection closed by authenticating user root 157.245.70.174 port 48966 [preauth] Jan 24 02:42:21.646851 systemd[1]: sshd@28-10.230.33.130:22-157.245.70.174:48966.service: Deactivated successfully. Jan 24 02:42:22.268178 sshd[5772]: pam_unix(sshd:session): session closed for user core Jan 24 02:42:22.273337 systemd[1]: sshd@27-10.230.33.130:22-20.161.92.111:52858.service: Deactivated successfully. Jan 24 02:42:22.276467 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 02:42:22.278860 systemd-logind[1491]: Session 20 logged out. Waiting for processes to exit. Jan 24 02:42:22.281949 systemd-logind[1491]: Removed session 20. Jan 24 02:42:22.374346 systemd[1]: Started sshd@29-10.230.33.130:22-20.161.92.111:52866.service - OpenSSH per-connection server daemon (20.161.92.111:52866). 
Jan 24 02:42:22.569273 kubelet[2692]: E0124 02:42:22.568462 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2" podUID="9e38edac-3735-49c2-8f05-a82f9686ac99" Jan 24 02:42:22.979451 sshd[5790]: Accepted publickey for core from 20.161.92.111 port 52866 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:42:22.981631 sshd[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:42:22.990766 systemd-logind[1491]: New session 21 of user core. Jan 24 02:42:22.997754 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 24 02:42:23.518491 sshd[5790]: pam_unix(sshd:session): session closed for user core Jan 24 02:42:23.528044 systemd[1]: sshd@29-10.230.33.130:22-20.161.92.111:52866.service: Deactivated successfully. Jan 24 02:42:23.528538 systemd-logind[1491]: Session 21 logged out. Waiting for processes to exit. Jan 24 02:42:23.533280 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 02:42:23.537855 systemd-logind[1491]: Removed session 21. 
Jan 24 02:42:24.559810 kubelet[2692]: E0124 02:42:24.559696 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rksn8" podUID="310e75f7-dcbf-42b8-8e1b-0553e380b8f3" Jan 24 02:42:24.560567 kubelet[2692]: E0124 02:42:24.560102 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652" Jan 24 02:42:26.234463 update_engine[1497]: I20260124 02:42:26.234368 1497 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 02:42:26.235500 update_engine[1497]: I20260124 02:42:26.234797 1497 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 02:42:26.235500 update_engine[1497]: I20260124 02:42:26.235107 1497 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 24 02:42:26.236641 update_engine[1497]: E20260124 02:42:26.236590 1497 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 02:42:26.236741 update_engine[1497]: I20260124 02:42:26.236670 1497 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 24 02:42:27.560789 kubelet[2692]: E0124 02:42:27.559596 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c647d49-zhs4g" podUID="9038de97-8842-48ce-8fdf-e9b5cfec0012" Jan 24 02:42:27.561387 kubelet[2692]: E0124 02:42:27.560812 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-46gqx" podUID="57ed0d28-f7e6-4e62-8d12-5c54e0de4159" Jan 24 02:42:28.624248 systemd[1]: Started sshd@30-10.230.33.130:22-20.161.92.111:40364.service - OpenSSH per-connection server daemon (20.161.92.111:40364).
Jan 24 02:42:29.209374 sshd[5804]: Accepted publickey for core from 20.161.92.111 port 40364 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:42:29.210305 sshd[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:42:29.219250 systemd-logind[1491]: New session 22 of user core. Jan 24 02:42:29.226737 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 24 02:42:29.564751 kubelet[2692]: E0124 02:42:29.564662 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db9cdc476-5b4vs" podUID="4eea0edb-bf06-4866-bb21-e6ce9438b127" Jan 24 02:42:29.757682 sshd[5804]: pam_unix(sshd:session): session closed for user core Jan 24 02:42:29.765169 systemd[1]: sshd@30-10.230.33.130:22-20.161.92.111:40364.service: Deactivated successfully. Jan 24 02:42:29.769481 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 02:42:29.775145 systemd-logind[1491]: Session 22 logged out. Waiting for processes to exit. Jan 24 02:42:29.776584 systemd-logind[1491]: Removed session 22. 
Jan 24 02:42:33.561163 kubelet[2692]: E0124 02:42:33.560761 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2" podUID="9e38edac-3735-49c2-8f05-a82f9686ac99" Jan 24 02:42:34.872505 systemd[1]: Started sshd@31-10.230.33.130:22-20.161.92.111:38696.service - OpenSSH per-connection server daemon (20.161.92.111:38696). Jan 24 02:42:35.461375 sshd[5820]: Accepted publickey for core from 20.161.92.111 port 38696 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:42:35.464146 sshd[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:42:35.475582 systemd-logind[1491]: New session 23 of user core. Jan 24 02:42:35.482572 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 02:42:35.961807 sshd[5820]: pam_unix(sshd:session): session closed for user core Jan 24 02:42:35.972252 systemd[1]: sshd@31-10.230.33.130:22-20.161.92.111:38696.service: Deactivated successfully. Jan 24 02:42:35.972507 systemd-logind[1491]: Session 23 logged out. Waiting for processes to exit. Jan 24 02:42:35.976459 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 02:42:35.978079 systemd-logind[1491]: Removed session 23. 
Jan 24 02:42:36.233901 update_engine[1497]: I20260124 02:42:36.232138 1497 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 02:42:36.233901 update_engine[1497]: I20260124 02:42:36.233479 1497 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 02:42:36.233901 update_engine[1497]: I20260124 02:42:36.233822 1497 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 02:42:36.234839 update_engine[1497]: E20260124 02:42:36.234792 1497 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 02:42:36.234993 update_engine[1497]: I20260124 02:42:36.234962 1497 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 24 02:42:36.560355 kubelet[2692]: E0124 02:42:36.560242 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rksn8" podUID="310e75f7-dcbf-42b8-8e1b-0553e380b8f3" Jan 24 02:42:38.563360 kubelet[2692]: E0124 02:42:38.562688 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-46gqx" podUID="57ed0d28-f7e6-4e62-8d12-5c54e0de4159" Jan 24 02:42:38.565247 kubelet[2692]: E0124 02:42:38.562688 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rrnz" podUID="9f63ab66-558d-4f53-8717-746e17757652"
Jan 24 02:42:40.561592 kubelet[2692]: E0124 02:42:40.560674 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d5c647d49-zhs4g" podUID="9038de97-8842-48ce-8fdf-e9b5cfec0012" Jan 24 02:42:41.074290 systemd[1]: Started sshd@32-10.230.33.130:22-20.161.92.111:38706.service - OpenSSH per-connection server daemon (20.161.92.111:38706).
Jan 24 02:42:41.660433 sshd[5833]: Accepted publickey for core from 20.161.92.111 port 38706 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 02:42:41.665560 sshd[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 02:42:41.681546 systemd-logind[1491]: New session 24 of user core. Jan 24 02:42:41.687008 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 24 02:42:42.218646 sshd[5833]: pam_unix(sshd:session): session closed for user core Jan 24 02:42:42.225091 systemd-logind[1491]: Session 24 logged out. Waiting for processes to exit. Jan 24 02:42:42.226457 systemd[1]: sshd@32-10.230.33.130:22-20.161.92.111:38706.service: Deactivated successfully. Jan 24 02:42:42.230913 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 02:42:42.235249 systemd-logind[1491]: Removed session 24. Jan 24 02:42:42.562758 kubelet[2692]: E0124 02:42:42.562674 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db9cdc476-5b4vs" podUID="4eea0edb-bf06-4866-bb21-e6ce9438b127" Jan 24 02:42:44.561527 containerd[1514]: time="2026-01-24T02:42:44.561432815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 02:42:44.879106 containerd[1514]: time="2026-01-24T02:42:44.878953654Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 02:42:44.880150 containerd[1514]: time="2026-01-24T02:42:44.880028086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 02:42:44.880520 containerd[1514]: time="2026-01-24T02:42:44.880245865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 02:42:44.881681 kubelet[2692]: E0124 02:42:44.881458 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 02:42:44.881681 kubelet[2692]: E0124 02:42:44.881573 2692 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 02:42:44.883397 kubelet[2692]: E0124 02:42:44.882774 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwhhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bcbb787c9-s24s2_calico-apiserver(9e38edac-3735-49c2-8f05-a82f9686ac99): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 02:42:44.884859 kubelet[2692]: E0124 02:42:44.884784 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bcbb787c9-s24s2" podUID="9e38edac-3735-49c2-8f05-a82f9686ac99" Jan 24 02:42:46.236255 update_engine[1497]: I20260124 02:42:46.236114 1497 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 02:42:46.238334 update_engine[1497]: I20260124 02:42:46.237497 1497 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 02:42:46.238334 update_engine[1497]: I20260124 02:42:46.237933 1497 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 02:42:46.240639 update_engine[1497]: E20260124 02:42:46.240593 1497 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 02:42:46.240745 update_engine[1497]: I20260124 02:42:46.240679 1497 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 24 02:42:46.240745 update_engine[1497]: I20260124 02:42:46.240708 1497 omaha_request_action.cc:617] Omaha request response: Jan 24 02:42:46.240921 update_engine[1497]: E20260124 02:42:46.240886 1497 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 24 02:42:46.245261 update_engine[1497]: I20260124 02:42:46.245218 1497 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 24 02:42:46.245261 update_engine[1497]: I20260124 02:42:46.245251 1497 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 02:42:46.245261 update_engine[1497]: I20260124 02:42:46.245265 1497 update_attempter.cc:306] Processing Done. Jan 24 02:42:46.248875 update_engine[1497]: E20260124 02:42:46.248833 1497 update_attempter.cc:619] Update failed. Jan 24 02:42:46.248875 update_engine[1497]: I20260124 02:42:46.248871 1497 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 24 02:42:46.248997 update_engine[1497]: I20260124 02:42:46.248886 1497 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 24 02:42:46.248997 update_engine[1497]: I20260124 02:42:46.248898 1497 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 24 02:42:46.249791 update_engine[1497]: I20260124 02:42:46.249738 1497 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 24 02:42:46.249856 update_engine[1497]: I20260124 02:42:46.249831 1497 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 24 02:42:46.249856 update_engine[1497]: I20260124 02:42:46.249850 1497 omaha_request_action.cc:272] Request: Jan 24 02:42:46.249856 update_engine[1497]: Jan 24 02:42:46.249856 update_engine[1497]: Jan 24 02:42:46.249856 update_engine[1497]: Jan 24 02:42:46.249856 update_engine[1497]: Jan 24 02:42:46.249856 update_engine[1497]: Jan 24 02:42:46.249856 update_engine[1497]: Jan 24 02:42:46.250182 update_engine[1497]: I20260124 02:42:46.249861 1497 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 02:42:46.250182 update_engine[1497]: I20260124 02:42:46.250118 1497 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 02:42:46.253057 update_engine[1497]: I20260124 02:42:46.252717 1497 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 24 02:42:46.253057 update_engine[1497]: E20260124 02:42:46.252837 1497 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 02:42:46.253057 update_engine[1497]: I20260124 02:42:46.252889 1497 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 24 02:42:46.253057 update_engine[1497]: I20260124 02:42:46.252905 1497 omaha_request_action.cc:617] Omaha request response: Jan 24 02:42:46.253057 update_engine[1497]: I20260124 02:42:46.252918 1497 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 02:42:46.253057 update_engine[1497]: I20260124 02:42:46.252928 1497 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 02:42:46.253057 update_engine[1497]: I20260124 02:42:46.252938 1497 update_attempter.cc:306] Processing Done. Jan 24 02:42:46.253057 update_engine[1497]: I20260124 02:42:46.252949 1497 update_attempter.cc:310] Error event sent. Jan 24 02:42:46.255986 locksmithd[1519]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 24 02:42:46.259162 update_engine[1497]: I20260124 02:42:46.258368 1497 update_check_scheduler.cc:74] Next update check in 48m19s Jan 24 02:42:46.260679 locksmithd[1519]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0