Jan 17 01:29:58.014127 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 01:29:58.014194 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 01:29:58.014208 kernel: BIOS-provided physical RAM map:
Jan 17 01:29:58.014223 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 01:29:58.014233 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 01:29:58.014243 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 01:29:58.014254 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 17 01:29:58.014264 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 17 01:29:58.014274 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 17 01:29:58.014283 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 17 01:29:58.014293 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 01:29:58.014303 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 01:29:58.014335 kernel: NX (Execute Disable) protection: active
Jan 17 01:29:58.014347 kernel: APIC: Static calls initialized
Jan 17 01:29:58.014358 kernel: SMBIOS 2.8 present.
Jan 17 01:29:58.014375 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014
Jan 17 01:29:58.014386 kernel: Hypervisor detected: KVM
Jan 17 01:29:58.014403 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 01:29:58.014414 kernel: kvm-clock: using sched offset of 5072799751 cycles
Jan 17 01:29:58.014426 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 01:29:58.014437 kernel: tsc: Detected 2799.998 MHz processor
Jan 17 01:29:58.014448 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 01:29:58.014459 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 01:29:58.014470 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 17 01:29:58.014481 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 01:29:58.014492 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 01:29:58.014508 kernel: Using GB pages for direct mapping
Jan 17 01:29:58.014519 kernel: ACPI: Early table checksum verification disabled
Jan 17 01:29:58.014530 kernel: ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS )
Jan 17 01:29:58.014541 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 01:29:58.014552 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 01:29:58.014563 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 01:29:58.014574 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 17 01:29:58.014585 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 01:29:58.014596 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 01:29:58.014612 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 01:29:58.014623 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 01:29:58.014634 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 17 01:29:58.014645 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 17 01:29:58.014657 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 17 01:29:58.014674 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 17 01:29:58.014685 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 17 01:29:58.014702 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 17 01:29:58.014714 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 17 01:29:58.014725 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 01:29:58.014742 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 01:29:58.014754 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 17 01:29:58.014765 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 17 01:29:58.014777 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 17 01:29:58.014788 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 17 01:29:58.014805 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 17 01:29:58.014817 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 17 01:29:58.014828 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 17 01:29:58.014839 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 17 01:29:58.014851 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 17 01:29:58.014862 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 17 01:29:58.014873 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 17 01:29:58.014885 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 17 01:29:58.014901 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 17 01:29:58.014918 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 17 01:29:58.014930 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 01:29:58.014942 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 17 01:29:58.014953 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 17 01:29:58.014965 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 17 01:29:58.014977 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 17 01:29:58.014988 kernel: Zone ranges:
Jan 17 01:29:58.015000 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 01:29:58.015011 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 17 01:29:58.015027 kernel: Normal empty
Jan 17 01:29:58.015040 kernel: Movable zone start for each node
Jan 17 01:29:58.015051 kernel: Early memory node ranges
Jan 17 01:29:58.015062 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 01:29:58.015074 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 17 01:29:58.015085 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 17 01:29:58.015097 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 01:29:58.015108 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 01:29:58.015125 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 17 01:29:58.015154 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 01:29:58.015174 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 01:29:58.015186 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 01:29:58.015198 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 01:29:58.015209 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 01:29:58.015221 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 01:29:58.015232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 01:29:58.015244 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 01:29:58.015255 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 01:29:58.015267 kernel: TSC deadline timer available
Jan 17 01:29:58.015283 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 17 01:29:58.015295 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 01:29:58.015307 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 17 01:29:58.015328 kernel: Booting paravirtualized kernel on KVM
Jan 17 01:29:58.015341 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 01:29:58.015352 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 17 01:29:58.015364 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Jan 17 01:29:58.015375 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Jan 17 01:29:58.015386 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 17 01:29:58.015404 kernel: kvm-guest: PV spinlocks enabled
Jan 17 01:29:58.015415 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 01:29:58.015428 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 01:29:58.015440 kernel: random: crng init done
Jan 17 01:29:58.015452 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 01:29:58.015463 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 01:29:58.015475 kernel: Fallback order for Node 0: 0
Jan 17 01:29:58.015486 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 17 01:29:58.015503 kernel: Policy zone: DMA32
Jan 17 01:29:58.015520 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 01:29:58.015532 kernel: software IO TLB: area num 16.
Jan 17 01:29:58.015544 kernel: Memory: 1901596K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 194760K reserved, 0K cma-reserved)
Jan 17 01:29:58.015556 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 17 01:29:58.015568 kernel: Kernel/User page tables isolation: enabled
Jan 17 01:29:58.015579 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 01:29:58.015590 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 01:29:58.015602 kernel: Dynamic Preempt: voluntary
Jan 17 01:29:58.015619 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 01:29:58.015632 kernel: rcu: RCU event tracing is enabled.
Jan 17 01:29:58.015644 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 17 01:29:58.015655 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 01:29:58.015667 kernel: Rude variant of Tasks RCU enabled.
Jan 17 01:29:58.015691 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 01:29:58.015708 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 01:29:58.015720 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 17 01:29:58.015732 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 17 01:29:58.015744 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 01:29:58.015756 kernel: Console: colour VGA+ 80x25
Jan 17 01:29:58.015768 kernel: printk: console [tty0] enabled
Jan 17 01:29:58.015785 kernel: printk: console [ttyS0] enabled
Jan 17 01:29:58.015797 kernel: ACPI: Core revision 20230628
Jan 17 01:29:58.015809 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 01:29:58.015821 kernel: x2apic enabled
Jan 17 01:29:58.015833 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 01:29:58.015851 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 17 01:29:58.015868 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 17 01:29:58.015881 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 01:29:58.015893 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 17 01:29:58.015905 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 17 01:29:58.015917 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 01:29:58.015929 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 01:29:58.015941 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 01:29:58.015953 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 17 01:29:58.015971 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 01:29:58.015983 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 01:29:58.015995 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 01:29:58.016007 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 17 01:29:58.016019 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 17 01:29:58.016030 kernel: active return thunk: its_return_thunk
Jan 17 01:29:58.016042 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 01:29:58.016071 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 01:29:58.016084 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 01:29:58.016096 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 01:29:58.016108 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 01:29:58.016125 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 01:29:58.016856 kernel: Freeing SMP alternatives memory: 32K
Jan 17 01:29:58.016884 kernel: pid_max: default: 32768 minimum: 301
Jan 17 01:29:58.016898 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 01:29:58.016910 kernel: landlock: Up and running.
Jan 17 01:29:58.016923 kernel: SELinux: Initializing.
Jan 17 01:29:58.016935 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 01:29:58.016947 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 01:29:58.016959 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 17 01:29:58.016971 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 17 01:29:58.016984 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 17 01:29:58.017004 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 17 01:29:58.017016 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 17 01:29:58.017029 kernel: signal: max sigframe size: 1776
Jan 17 01:29:58.017041 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 01:29:58.017054 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 01:29:58.017066 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 01:29:58.017078 kernel: smp: Bringing up secondary CPUs ...
Jan 17 01:29:58.017090 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 01:29:58.017103 kernel: .... node #0, CPUs: #1
Jan 17 01:29:58.017120 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 17 01:29:58.017132 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 01:29:58.017172 kernel: smpboot: Max logical packages: 16
Jan 17 01:29:58.017185 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jan 17 01:29:58.017197 kernel: devtmpfs: initialized
Jan 17 01:29:58.017210 kernel: x86/mm: Memory block size: 128MB
Jan 17 01:29:58.017222 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 01:29:58.017234 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 17 01:29:58.017246 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 01:29:58.017265 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 01:29:58.017277 kernel: audit: initializing netlink subsys (disabled)
Jan 17 01:29:58.017289 kernel: audit: type=2000 audit(1768613396.531:1): state=initialized audit_enabled=0 res=1
Jan 17 01:29:58.017301 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 01:29:58.017331 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 01:29:58.017345 kernel: cpuidle: using governor menu
Jan 17 01:29:58.017358 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 01:29:58.017370 kernel: dca service started, version 1.12.1
Jan 17 01:29:58.017382 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 17 01:29:58.017399 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 17 01:29:58.017412 kernel: PCI: Using configuration type 1 for base access
Jan 17 01:29:58.017424 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 01:29:58.017436 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 01:29:58.017448 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 01:29:58.017460 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 01:29:58.017472 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 01:29:58.017484 kernel: ACPI: Added _OSI(Module Device)
Jan 17 01:29:58.017496 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 01:29:58.017513 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 01:29:58.017525 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 01:29:58.017538 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 01:29:58.017550 kernel: ACPI: Interpreter enabled
Jan 17 01:29:58.017561 kernel: ACPI: PM: (supports S0 S5)
Jan 17 01:29:58.017573 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 01:29:58.017586 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 01:29:58.017598 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 01:29:58.017610 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 01:29:58.017626 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 01:29:58.017912 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 01:29:58.018108 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 01:29:58.018300 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 01:29:58.018370 kernel: PCI host bridge to bus 0000:00
Jan 17 01:29:58.018548 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 01:29:58.018705 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 01:29:58.018867 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 01:29:58.019020 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 17 01:29:58.019197 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 01:29:58.019370 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 17 01:29:58.019525 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 01:29:58.019751 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 01:29:58.019957 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 17 01:29:58.020130 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 17 01:29:58.021248 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 17 01:29:58.021440 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 17 01:29:58.021612 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 01:29:58.021805 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 17 01:29:58.021980 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 17 01:29:58.022445 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 17 01:29:58.022621 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 17 01:29:58.022799 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 17 01:29:58.022966 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 17 01:29:58.023180 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 17 01:29:58.025295 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 17 01:29:58.025532 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 17 01:29:58.025708 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 17 01:29:58.025889 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 17 01:29:58.026077 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 17 01:29:58.028346 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 17 01:29:58.028529 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 17 01:29:58.028734 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 17 01:29:58.028905 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 17 01:29:58.029106 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 17 01:29:58.029325 kernel: pci 0000:00:03.0: reg 0x10: [io 0xd0c0-0xd0df]
Jan 17 01:29:58.029502 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 17 01:29:58.029675 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 17 01:29:58.029875 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 17 01:29:58.030065 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 17 01:29:58.031367 kernel: pci 0000:00:04.0: reg 0x10: [io 0xd000-0xd07f]
Jan 17 01:29:58.031548 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 17 01:29:58.031720 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 17 01:29:58.031900 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 01:29:58.032068 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 01:29:58.032280 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 01:29:58.032498 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xd0e0-0xd0ff]
Jan 17 01:29:58.032889 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 17 01:29:58.033096 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 01:29:58.033373 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 17 01:29:58.033575 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 17 01:29:58.033755 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 17 01:29:58.033942 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 17 01:29:58.034114 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Jan 17 01:29:58.034341 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 17 01:29:58.034513 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 01:29:58.034709 kernel: pci_bus 0000:02: extended config space not accessible
Jan 17 01:29:58.034905 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 17 01:29:58.035101 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 17 01:29:58.036023 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 17 01:29:58.036265 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Jan 17 01:29:58.036461 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 17 01:29:58.036639 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 01:29:58.036828 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 17 01:29:58.037006 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 17 01:29:58.037214 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 17 01:29:58.037396 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 17 01:29:58.037562 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 17 01:29:58.037762 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 17 01:29:58.037936 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 17 01:29:58.038105 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 17 01:29:58.040366 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 17 01:29:58.040556 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 17 01:29:58.040731 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 17 01:29:58.040901 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 17 01:29:58.041069 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 17 01:29:58.042291 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 17 01:29:58.042483 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 17 01:29:58.042656 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 17 01:29:58.042830 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 17 01:29:58.043006 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 17 01:29:58.045215 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 17 01:29:58.045418 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 17 01:29:58.045593 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 17 01:29:58.045785 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 17 01:29:58.045956 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 17 01:29:58.046123 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 17 01:29:58.046340 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 17 01:29:58.046369 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 01:29:58.046383 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 01:29:58.046396 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 01:29:58.046408 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 01:29:58.046421 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 01:29:58.046433 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 01:29:58.046445 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 01:29:58.046458 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 01:29:58.046470 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 01:29:58.046487 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 01:29:58.046500 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 01:29:58.046512 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 01:29:58.046525 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 01:29:58.046537 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 01:29:58.046549 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 01:29:58.046561 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 01:29:58.046573 kernel: iommu: Default domain type: Translated
Jan 17 01:29:58.046586 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 01:29:58.046603 kernel: PCI: Using ACPI for IRQ routing
Jan 17 01:29:58.046615 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 01:29:58.046628 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 01:29:58.046640 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 17 01:29:58.046816 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 01:29:58.046996 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 01:29:58.048911 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 01:29:58.048937 kernel: vgaarb: loaded
Jan 17 01:29:58.048950 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 01:29:58.048971 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 01:29:58.048983 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 01:29:58.048996 kernel: pnp: PnP ACPI init
Jan 17 01:29:58.050266 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 17 01:29:58.050289 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 01:29:58.050302 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 01:29:58.050326 kernel: NET: Registered PF_INET protocol family
Jan 17 01:29:58.050339 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 01:29:58.050360 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 01:29:58.050374 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 01:29:58.050386 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 01:29:58.050399 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 01:29:58.050411 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 01:29:58.050423 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 01:29:58.050436 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 01:29:58.050449 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 01:29:58.050461 kernel: NET: Registered PF_XDP protocol family
Jan 17 01:29:58.050639 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 17 01:29:58.050808 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 17 01:29:58.050974 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 17 01:29:58.051153 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 17 01:29:58.051337 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 17 01:29:58.051505 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 17 01:29:58.051681 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 17 01:29:58.051847 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x1000-0x1fff]
Jan 17 01:29:58.052011 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x2000-0x2fff]
Jan 17 01:29:58.055213 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x3000-0x3fff]
Jan 17 01:29:58.055400 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x4000-0x4fff]
Jan 17 01:29:58.055567 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x5000-0x5fff]
Jan 17 01:29:58.055735 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x6000-0x6fff]
Jan 17 01:29:58.055913 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x7000-0x7fff]
Jan 17 01:29:58.056115 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 17 01:29:58.056336 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Jan 17 01:29:58.056510 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 17 01:29:58.056681 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 01:29:58.056849 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 17 01:29:58.057014 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Jan 17 01:29:58.059212 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 17 01:29:58.059395 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 01:29:58.059562 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 17 01:29:58.059737 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]
Jan 17 01:29:58.059906 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 17 01:29:58.060077 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 17 01:29:58.060268 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 17 01:29:58.060450 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]
Jan 17 01:29:58.060630 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 17 01:29:58.060800 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 17 01:29:58.060969 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 17 01:29:58.062183 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]
Jan 17 01:29:58.062375 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 17 01:29:58.062543 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 17 01:29:58.062708 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 17 01:29:58.062872 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]
Jan 17 01:29:58.063037 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 17 01:29:58.064271 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 17 01:29:58.064452 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 17 01:29:58.064618 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]
Jan 17 01:29:58.064783 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 17 01:29:58.064949 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 17 01:29:58.065125 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 17 01:29:58.067342 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff]
Jan 17 01:29:58.067510 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 17 01:29:58.067678 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 17 01:29:58.067846 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 17 01:29:58.068014 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff]
Jan 17 01:29:58.068222 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 17 01:29:58.068433 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 17 01:29:58.068594 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 01:29:58.068753 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 01:29:58.068902 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 01:29:58.069051 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 17 01:29:58.070244 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 17 01:29:58.070411 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 17 01:29:58.070585 kernel: pci_bus 0000:01: resource 0 [io 0xc000-0xcfff]
Jan 17 01:29:58.070744 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 17 01:29:58.070909 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 01:29:58.071080 kernel: pci_bus 0000:02: resource 0 [io 0xc000-0xcfff]
Jan 17 01:29:58.071266 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 17 01:29:58.071444 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 01:29:58.071616 kernel: pci_bus 0000:03: resource 0 [io 0x1000-0x1fff]
Jan 17 01:29:58.071776 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 17 01:29:58.071935 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 17 01:29:58.072125 kernel: pci_bus 0000:04: resource 0 [io 0x2000-0x2fff]
Jan 17 01:29:58.075386 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 17 01:29:58.075569 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 17 01:29:58.075742 kernel: pci_bus 0000:05: resource 0 [io 0x3000-0x3fff]
Jan 17 01:29:58.075904 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 17 01:29:58.076063 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 17 01:29:58.076295 kernel: pci_bus 0000:06: resource 0 [io 0x4000-0x4fff]
Jan 17 01:29:58.076476 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 17 01:29:58.076635 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 17 01:29:58.076819 kernel: pci_bus 0000:07: resource 0 [io 0x5000-0x5fff]
Jan 17 01:29:58.076977 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 17 01:29:58.077134 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 17 01:29:58.078363 kernel: pci_bus 0000:08: resource 0 [io 0x6000-0x6fff]
Jan 17 01:29:58.078522 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 17 01:29:58.078687 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 17 01:29:58.078863 kernel: pci_bus 0000:09: resource 0 [io 0x7000-0x7fff]
Jan 17 01:29:58.079021 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 17 01:29:58.079216 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 17 01:29:58.079238 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 17 01:29:58.079252 kernel: PCI: CLS 0 bytes, default 64
Jan 17 01:29:58.079273 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 01:29:58.079286 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Jan 17 01:29:58.079300 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 01:29:58.079323 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 17 01:29:58.079338 kernel: Initialise system trusted keyrings
Jan 17 01:29:58.079351 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 01:29:58.079364 kernel: Key type asymmetric registered
Jan 17 01:29:58.079377 kernel: Asymmetric key parser 'x509' registered
Jan 17 01:29:58.079390 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 01:29:58.079409 kernel: io scheduler mq-deadline registered
Jan 17 01:29:58.079422 kernel: io scheduler kyber registered
Jan 17 01:29:58.079435 kernel: io scheduler bfq registered
Jan 17 01:29:58.079604 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 17 01:29:58.079773 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 17 01:29:58.079939 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 01:29:58.080116 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 17 01:29:58.080320 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 17 01:29:58.080500 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 01:29:58.080667 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 17 01:29:58.080837 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 17 01:29:58.081019 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 01:29:58.081226 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 17 01:29:58.081404 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 17 01:29:58.081580 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 01:29:58.081747 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 17 01:29:58.081912 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 17 01:29:58.082078 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 01:29:58.082273 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 17 01:29:58.082453 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 17 01:29:58.082631 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 01:29:58.082799 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 17 01:29:58.082969 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 17 01:29:58.083173 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 01:29:58.083359 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 17 01:29:58.083526 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 17 01:29:58.083700 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 01:29:58.083721 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 01:29:58.083735 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 17 01:29:58.083748 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 17 01:29:58.083761 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 01:29:58.083781 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 01:29:58.083795 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 01:29:58.083808 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 01:29:58.083826 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 01:29:58.083839 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 01:29:58.084035 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 17 01:29:58.084221 kernel: rtc_cmos 00:03: registered as rtc0
Jan 17 01:29:58.084395 kernel: rtc_cmos 00:03: setting system clock to 2026-01-17T01:29:57 UTC (1768613397)
Jan 17 01:29:58.084550 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 17 01:29:58.084570 kernel: intel_pstate: CPU model not supported
Jan 17 01:29:58.084583 kernel: NET: Registered PF_INET6 protocol family
Jan 17 01:29:58.084604 kernel: Segment Routing with IPv6
Jan 17 01:29:58.084617 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 01:29:58.084630 kernel: NET: Registered PF_PACKET protocol family
Jan 17 01:29:58.084643 kernel: Key type dns_resolver registered
Jan 17 01:29:58.084656 kernel: IPI shorthand broadcast: enabled
Jan 17 01:29:58.084669 kernel: sched_clock: Marking stable (1510003715, 223211889)->(1870550355, -137334751)
Jan 17 01:29:58.084682 kernel: registered taskstats version 1
Jan 17 01:29:58.084695 kernel: Loading compiled-in X.509 certificates
Jan 17 01:29:58.084708 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 01:29:58.084726 kernel: Key type .fscrypt registered
Jan 17 01:29:58.084739 kernel: Key type fscrypt-provisioning registered
Jan 17 01:29:58.084752 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 01:29:58.084765 kernel: ima: Allocated hash algorithm: sha1
Jan 17 01:29:58.084778 kernel: ima: No architecture policies found
Jan 17 01:29:58.084791 kernel: clk: Disabling unused clocks
Jan 17 01:29:58.084804 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 01:29:58.084817 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 01:29:58.084830 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 01:29:58.084848 kernel: Run /init as init process
Jan 17 01:29:58.084861 kernel: with arguments:
Jan 17 01:29:58.084874 kernel: /init
Jan 17 01:29:58.084886 kernel: with environment:
Jan 17 01:29:58.084899 kernel: HOME=/
Jan 17 01:29:58.084912 kernel: TERM=linux
Jan 17 01:29:58.084927 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 01:29:58.084943 systemd[1]: Detected virtualization kvm.
Jan 17 01:29:58.084963 systemd[1]: Detected architecture x86-64.
Jan 17 01:29:58.084977 systemd[1]: Running in initrd.
Jan 17 01:29:58.084991 systemd[1]: No hostname configured, using default hostname.
Jan 17 01:29:58.085004 systemd[1]: Hostname set to <localhost>.
Jan 17 01:29:58.085018 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 01:29:58.085031 systemd[1]: Queued start job for default target initrd.target.
Jan 17 01:29:58.085045 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 01:29:58.085059 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 01:29:58.085079 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 01:29:58.085093 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 01:29:58.085107 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 01:29:58.085121 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 01:29:58.085148 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 01:29:58.085178 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 01:29:58.085192 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 01:29:58.085213 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 01:29:58.085227 systemd[1]: Reached target paths.target - Path Units.
Jan 17 01:29:58.085241 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 01:29:58.085255 systemd[1]: Reached target swap.target - Swaps.
Jan 17 01:29:58.085269 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 01:29:58.085283 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 01:29:58.085297 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 01:29:58.085320 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 01:29:58.085342 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 01:29:58.085356 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 01:29:58.085370 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 01:29:58.085384 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 01:29:58.085398 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 01:29:58.085412 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 01:29:58.085426 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 01:29:58.085445 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 01:29:58.085459 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 01:29:58.085478 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 01:29:58.085492 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 01:29:58.085506 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 01:29:58.085558 systemd-journald[203]: Collecting audit messages is disabled.
Jan 17 01:29:58.085595 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 01:29:58.085610 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 01:29:58.085624 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 01:29:58.085639 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 01:29:58.085658 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 01:29:58.085672 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 01:29:58.085685 kernel: Bridge firewalling registered
Jan 17 01:29:58.085699 systemd-journald[203]: Journal started
Jan 17 01:29:58.085724 systemd-journald[203]: Runtime Journal (/run/log/journal/7b0aceb9937d40d39fe3241efee5cc5e) is 4.7M, max 38.0M, 33.2M free.
Jan 17 01:29:58.029736 systemd-modules-load[204]: Inserted module 'overlay'
Jan 17 01:29:58.132563 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 01:29:58.077737 systemd-modules-load[204]: Inserted module 'br_netfilter'
Jan 17 01:29:58.139914 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 01:29:58.140926 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 01:29:58.149413 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 01:29:58.151329 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 01:29:58.158503 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 01:29:58.162334 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 01:29:58.178821 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 01:29:58.181216 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 01:29:58.192404 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 01:29:58.194451 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 01:29:58.196771 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 01:29:58.202176 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 01:29:58.210868 dracut-cmdline[237]: dracut-dracut-053
Jan 17 01:29:58.215788 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 01:29:58.252067 systemd-resolved[243]: Positive Trust Anchors:
Jan 17 01:29:58.252094 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 01:29:58.252148 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 01:29:58.256109 systemd-resolved[243]: Defaulting to hostname 'linux'.
Jan 17 01:29:58.257763 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 01:29:58.258986 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 01:29:58.326201 kernel: SCSI subsystem initialized
Jan 17 01:29:58.337174 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 01:29:58.350538 kernel: iscsi: registered transport (tcp)
Jan 17 01:29:58.375663 kernel: iscsi: registered transport (qla4xxx)
Jan 17 01:29:58.375742 kernel: QLogic iSCSI HBA Driver
Jan 17 01:29:58.429812 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 01:29:58.439441 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 01:29:58.469260 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 01:29:58.469348 kernel: device-mapper: uevent: version 1.0.3
Jan 17 01:29:58.470767 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 01:29:58.519191 kernel: raid6: sse2x4 gen() 14472 MB/s
Jan 17 01:29:58.537176 kernel: raid6: sse2x2 gen() 10073 MB/s
Jan 17 01:29:58.555705 kernel: raid6: sse2x1 gen() 10643 MB/s
Jan 17 01:29:58.555765 kernel: raid6: using algorithm sse2x4 gen() 14472 MB/s
Jan 17 01:29:58.574756 kernel: raid6: .... xor() 7965 MB/s, rmw enabled
Jan 17 01:29:58.574819 kernel: raid6: using ssse3x2 recovery algorithm
Jan 17 01:29:58.599205 kernel: xor: automatically using best checksumming function avx
Jan 17 01:29:58.801181 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 01:29:58.814922 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 01:29:58.822378 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 01:29:58.850952 systemd-udevd[423]: Using default interface naming scheme 'v255'.
Jan 17 01:29:58.857600 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 01:29:58.865572 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 01:29:58.888897 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation
Jan 17 01:29:58.940522 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 01:29:58.949414 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 01:29:59.070700 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 01:29:59.078359 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 01:29:59.116728 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 01:29:59.119105 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 01:29:59.121896 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 01:29:59.124851 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 01:29:59.133874 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 01:29:59.174340 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 01:29:59.218177 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 17 01:29:59.232219 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 01:29:59.238171 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 17 01:29:59.243166 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 01:29:59.243365 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 01:29:59.260425 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 01:29:59.260529 kernel: GPT:17805311 != 125829119
Jan 17 01:29:59.260574 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 01:29:59.260595 kernel: GPT:17805311 != 125829119
Jan 17 01:29:59.260632 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 01:29:59.260650 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 01:29:59.259528 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 01:29:59.261095 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 01:29:59.261830 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 01:29:59.262751 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 01:29:59.271174 kernel: ACPI: bus type USB registered
Jan 17 01:29:59.275917 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 01:29:59.285277 kernel: usbcore: registered new interface driver usbfs
Jan 17 01:29:59.287160 kernel: libata version 3.00 loaded.
Jan 17 01:29:59.293185 kernel: usbcore: registered new interface driver hub
Jan 17 01:29:59.299171 kernel: usbcore: registered new device driver usb
Jan 17 01:29:59.310376 kernel: AVX version of gcm_enc/dec engaged.
Jan 17 01:29:59.310444 kernel: AES CTR mode by8 optimization enabled
Jan 17 01:29:59.361196 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 17 01:29:59.361640 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Jan 17 01:29:59.361856 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 17 01:29:59.363163 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 17 01:29:59.363468 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Jan 17 01:29:59.363675 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Jan 17 01:29:59.363899 kernel: hub 1-0:1.0: USB hub found
Jan 17 01:29:59.364206 kernel: hub 1-0:1.0: 4 ports detected
Jan 17 01:29:59.364438 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 17 01:29:59.364677 kernel: hub 2-0:1.0: USB hub found
Jan 17 01:29:59.364980 kernel: hub 2-0:1.0: 4 ports detected
Jan 17 01:29:59.368972 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470)
Jan 17 01:29:59.393180 kernel: ahci 0000:00:1f.2: version 3.0
Jan 17 01:29:59.399213 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 17 01:29:59.401935 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 01:29:59.480464 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (476)
Jan 17 01:29:59.480553 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 17 01:29:59.480950 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 17 01:29:59.482644 kernel: scsi host0: ahci
Jan 17 01:29:59.482891 kernel: scsi host1: ahci
Jan 17 01:29:59.483914 kernel: scsi host2: ahci
Jan 17 01:29:59.484226 kernel: scsi host3: ahci
Jan 17 01:29:59.484477 kernel: scsi host4: ahci
Jan 17 01:29:59.484761 kernel: scsi host5: ahci
Jan 17 01:29:59.484996 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Jan 17 01:29:59.485017 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Jan 17 01:29:59.485035 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Jan 17 01:29:59.485052 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Jan 17 01:29:59.485068 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Jan 17 01:29:59.485085 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Jan 17 01:29:59.486629 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 01:29:59.494844 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 01:29:59.501701 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 01:29:59.507358 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 01:29:59.508132 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 01:29:59.520471 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 01:29:59.524674 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 01:29:59.530165 disk-uuid[566]: Primary Header is updated. Jan 17 01:29:59.530165 disk-uuid[566]: Secondary Entries is updated. Jan 17 01:29:59.530165 disk-uuid[566]: Secondary Header is updated. Jan 17 01:29:59.539545 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 01:29:59.544416 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 01:29:59.552188 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 01:29:59.578952 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 01:29:59.607261 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 17 01:29:59.737174 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 01:29:59.737311 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 17 01:29:59.740424 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 01:29:59.741160 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 01:29:59.749167 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 01:29:59.749245 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 01:29:59.770176 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 01:29:59.779583 kernel: usbcore: registered new interface driver usbhid Jan 17 01:29:59.779668 kernel: usbhid: USB HID core driver Jan 17 01:29:59.793178 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 17 01:29:59.802181 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 17 01:30:00.552200 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 01:30:00.553174 disk-uuid[567]: The operation has completed successfully. Jan 17 01:30:00.615192 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 01:30:00.615379 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 01:30:00.632368 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 01:30:00.653399 sh[589]: Success Jan 17 01:30:00.670186 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 17 01:30:00.739522 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 01:30:00.741474 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 01:30:00.744111 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 17 01:30:00.778181 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 01:30:00.778265 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 01:30:00.778288 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 01:30:00.778306 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 01:30:00.781128 kernel: BTRFS info (device dm-0): using free space tree Jan 17 01:30:00.791663 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 01:30:00.793193 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 01:30:00.800377 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 01:30:00.803341 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 01:30:00.818306 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 01:30:00.818364 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 01:30:00.818384 kernel: BTRFS info (device vda6): using free space tree Jan 17 01:30:00.827171 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 01:30:00.841515 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 01:30:00.841122 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 01:30:00.853555 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 01:30:00.862389 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 01:30:00.939940 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 01:30:00.950661 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 01:30:01.041385 systemd-networkd[770]: lo: Link UP Jan 17 01:30:01.041399 systemd-networkd[770]: lo: Gained carrier Jan 17 01:30:01.044538 systemd-networkd[770]: Enumeration completed Jan 17 01:30:01.045031 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 01:30:01.045099 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 01:30:01.045104 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 01:30:01.046196 systemd[1]: Reached target network.target - Network. Jan 17 01:30:01.046397 systemd-networkd[770]: eth0: Link UP Jan 17 01:30:01.046403 systemd-networkd[770]: eth0: Gained carrier Jan 17 01:30:01.046414 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 01:30:01.108214 ignition[681]: Ignition 2.19.0 Jan 17 01:30:01.108246 ignition[681]: Stage: fetch-offline Jan 17 01:30:01.110446 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 17 01:30:01.108342 ignition[681]: no configs at "/usr/lib/ignition/base.d" Jan 17 01:30:01.108373 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:30:01.108552 ignition[681]: parsed url from cmdline: "" Jan 17 01:30:01.108559 ignition[681]: no config URL provided Jan 17 01:30:01.108568 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 01:30:01.108583 ignition[681]: no config at "/usr/lib/ignition/user.ign" Jan 17 01:30:01.108592 ignition[681]: failed to fetch config: resource requires networking Jan 17 01:30:01.109089 ignition[681]: Ignition finished successfully Jan 17 01:30:01.118787 systemd-networkd[770]: eth0: DHCPv4 address 10.243.73.142/30, gateway 10.243.73.141 acquired from 10.243.73.141 Jan 17 01:30:01.121766 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 01:30:01.168421 ignition[780]: Ignition 2.19.0 Jan 17 01:30:01.168441 ignition[780]: Stage: fetch Jan 17 01:30:01.168679 ignition[780]: no configs at "/usr/lib/ignition/base.d" Jan 17 01:30:01.168699 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:30:01.168871 ignition[780]: parsed url from cmdline: "" Jan 17 01:30:01.168878 ignition[780]: no config URL provided Jan 17 01:30:01.168888 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 01:30:01.168904 ignition[780]: no config at "/usr/lib/ignition/user.ign" Jan 17 01:30:01.169011 ignition[780]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 17 01:30:01.169054 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 17 01:30:01.169092 ignition[780]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 17 01:30:01.187016 ignition[780]: GET result: OK Jan 17 01:30:01.187999 ignition[780]: parsing config with SHA512: 15885754570a1e35072a51631e268a4ee894038d80141abefe079c523ae257a4c1ce2766561033a3868764a76d1be0036bca024cb26e20b0f6cdb7d8c6447ac5 Jan 17 01:30:01.197359 unknown[780]: fetched base config from "system" Jan 17 01:30:01.197377 unknown[780]: fetched base config from "system" Jan 17 01:30:01.198020 ignition[780]: fetch: fetch complete Jan 17 01:30:01.197387 unknown[780]: fetched user config from "openstack" Jan 17 01:30:01.198029 ignition[780]: fetch: fetch passed Jan 17 01:30:01.199877 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 01:30:01.198108 ignition[780]: Ignition finished successfully Jan 17 01:30:01.207364 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 01:30:01.230657 ignition[788]: Ignition 2.19.0 Jan 17 01:30:01.231387 ignition[788]: Stage: kargs Jan 17 01:30:01.231600 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jan 17 01:30:01.231620 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:30:01.232795 ignition[788]: kargs: kargs passed Jan 17 01:30:01.232867 ignition[788]: Ignition finished successfully Jan 17 01:30:01.236378 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 01:30:01.247463 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 01:30:01.272451 ignition[794]: Ignition 2.19.0 Jan 17 01:30:01.272473 ignition[794]: Stage: disks Jan 17 01:30:01.272754 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jan 17 01:30:01.275131 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 17 01:30:01.272774 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:30:01.276535 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 01:30:01.273996 ignition[794]: disks: disks passed Jan 17 01:30:01.277356 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 01:30:01.274086 ignition[794]: Ignition finished successfully Jan 17 01:30:01.278846 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 01:30:01.280495 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 01:30:01.282044 systemd[1]: Reached target basic.target - Basic System. Jan 17 01:30:01.289454 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 01:30:01.309873 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 01:30:01.313937 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 01:30:01.319247 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 01:30:01.448163 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. Jan 17 01:30:01.447558 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 01:30:01.450383 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 01:30:01.459288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 01:30:01.462431 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 01:30:01.463490 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 01:30:01.465593 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 17 01:30:01.467777 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 01:30:01.467818 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 01:30:01.481049 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811) Jan 17 01:30:01.481085 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 01:30:01.481103 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 01:30:01.481119 kernel: BTRFS info (device vda6): using free space tree Jan 17 01:30:01.489130 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 01:30:01.488441 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 01:30:01.495404 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 01:30:01.504387 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 01:30:01.615993 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 01:30:01.628279 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jan 17 01:30:01.638693 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 01:30:01.645630 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 01:30:01.754589 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 01:30:01.761304 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jan 17 01:30:01.763355 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 01:30:01.776812 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 01:30:01.779206 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 01:30:01.807708 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 01:30:01.821051 ignition[929]: INFO : Ignition 2.19.0 Jan 17 01:30:01.821051 ignition[929]: INFO : Stage: mount Jan 17 01:30:01.823287 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 01:30:01.825248 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:30:01.826324 ignition[929]: INFO : mount: mount passed Jan 17 01:30:01.826324 ignition[929]: INFO : Ignition finished successfully Jan 17 01:30:01.828088 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 01:30:02.585526 systemd-networkd[770]: eth0: Gained IPv6LL Jan 17 01:30:04.093607 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d263:24:19ff:fef3:498e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d263:24:19ff:fef3:498e/64 assigned by NDisc. Jan 17 01:30:04.093624 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 17 01:30:08.683619 coreos-metadata[813]: Jan 17 01:30:08.683 WARN failed to locate config-drive, using the metadata service API instead Jan 17 01:30:08.705755 coreos-metadata[813]: Jan 17 01:30:08.705 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 17 01:30:08.727811 coreos-metadata[813]: Jan 17 01:30:08.727 INFO Fetch successful Jan 17 01:30:08.728824 coreos-metadata[813]: Jan 17 01:30:08.728 INFO wrote hostname srv-dv3jc.gb1.brightbox.com to /sysroot/etc/hostname Jan 17 01:30:08.731659 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 17 01:30:08.731836 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 17 01:30:08.742497 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 01:30:08.770614 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 01:30:08.797181 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945) Jan 17 01:30:08.804098 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 01:30:08.804183 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 01:30:08.804212 kernel: BTRFS info (device vda6): using free space tree Jan 17 01:30:08.810188 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 01:30:08.812304 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 01:30:08.847602 ignition[962]: INFO : Ignition 2.19.0 Jan 17 01:30:08.848995 ignition[962]: INFO : Stage: files Jan 17 01:30:08.851203 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 01:30:08.851203 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:30:08.851203 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Jan 17 01:30:08.853803 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 01:30:08.853803 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 01:30:08.856673 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 01:30:08.857885 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 01:30:08.859322 unknown[962]: wrote ssh authorized keys file for user: core Jan 17 01:30:08.860370 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 01:30:08.861586 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 01:30:08.862811 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 17 01:30:09.036094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 01:30:09.305823 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 01:30:09.315838 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 01:30:09.315838 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 01:30:09.315838 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 01:30:09.315838 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 01:30:09.315838 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 01:30:09.315838 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 01:30:09.315838 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 01:30:09.315838 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 01:30:09.315838 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 01:30:09.315838 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 01:30:09.315838 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 01:30:09.315838 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 01:30:09.315838 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 01:30:09.315838 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 17 01:30:09.651078 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 01:30:10.736217 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 01:30:10.743292 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 01:30:10.743292 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 01:30:10.743292 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 01:30:10.743292 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 01:30:10.743292 ignition[962]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 01:30:10.743292 ignition[962]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 01:30:10.750946 ignition[962]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 01:30:10.750946 ignition[962]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 01:30:10.750946 ignition[962]: INFO : files: files passed Jan 17 01:30:10.750946 ignition[962]: INFO : Ignition finished successfully Jan 17 01:30:10.752372 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 01:30:10.771627 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 01:30:10.777492 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 01:30:10.782776 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 01:30:10.782990 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 01:30:10.809452 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 01:30:10.809452 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 01:30:10.813104 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 01:30:10.815113 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 01:30:10.816682 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 01:30:10.828431 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 01:30:10.859095 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 01:30:10.859297 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 01:30:10.861072 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jan 17 01:30:10.862445 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 01:30:10.864005 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 01:30:10.872825 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 01:30:10.890867 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 01:30:10.901363 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 01:30:10.914030 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 01:30:10.914942 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 01:30:10.916555 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 01:30:10.917981 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 01:30:10.918190 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 01:30:10.920065 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 01:30:10.921026 systemd[1]: Stopped target basic.target - Basic System. Jan 17 01:30:10.922536 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 01:30:10.923881 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 01:30:10.925305 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 01:30:10.926739 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 01:30:10.928301 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 01:30:10.929900 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 01:30:10.931337 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 01:30:10.932956 systemd[1]: Stopped target swap.target - Swaps. Jan 17 01:30:10.934385 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 01:30:10.934577 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 01:30:10.936301 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 01:30:10.937333 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 01:30:10.938662 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 01:30:10.940224 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 01:30:10.941092 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 01:30:10.941298 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 01:30:10.943241 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 01:30:10.943424 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 01:30:10.944483 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 01:30:10.944703 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 01:30:10.953874 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 01:30:10.954586 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 01:30:10.954837 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 01:30:10.958421 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 01:30:10.959103 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 17 01:30:10.959336 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 01:30:10.962373 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 01:30:10.962558 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 01:30:10.977770 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 01:30:10.977927 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 01:30:10.987538 ignition[1015]: INFO : Ignition 2.19.0 Jan 17 01:30:10.989574 ignition[1015]: INFO : Stage: umount Jan 17 01:30:10.989574 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 01:30:10.989574 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:30:10.993324 ignition[1015]: INFO : umount: umount passed Jan 17 01:30:10.993324 ignition[1015]: INFO : Ignition finished successfully Jan 17 01:30:10.993534 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 01:30:10.993736 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 01:30:11.001691 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 01:30:11.002295 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 01:30:11.002377 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 01:30:11.004932 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 01:30:11.005031 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 01:30:11.005717 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 01:30:11.005782 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 01:30:11.008291 systemd[1]: Stopped target network.target - Network. Jan 17 01:30:11.009299 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 01:30:11.009372 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 01:30:11.010968 systemd[1]: Stopped target paths.target - Path Units. Jan 17 01:30:11.012235 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 01:30:11.012709 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 01:30:11.013732 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 01:30:11.015180 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 01:30:11.016577 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 01:30:11.016651 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 01:30:11.018015 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 01:30:11.018082 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 01:30:11.019639 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 01:30:11.019723 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 01:30:11.020938 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 01:30:11.021033 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 01:30:11.022600 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 01:30:11.024822 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 01:30:11.029651 systemd-networkd[770]: eth0: DHCPv6 lease lost Jan 17 01:30:11.031546 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 17 01:30:11.031737 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 01:30:11.033784 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 01:30:11.034655 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 01:30:11.038533 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 01:30:11.038866 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 01:30:11.055849 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 01:30:11.058251 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 01:30:11.058348 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 01:30:11.059175 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 01:30:11.059243 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 01:30:11.060808 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 01:30:11.060879 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 01:30:11.062559 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 01:30:11.062631 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 01:30:11.064303 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 01:30:11.076212 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 01:30:11.076497 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 01:30:11.078574 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 01:30:11.078723 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 01:30:11.080241 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 01:30:11.080367 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 01:30:11.081822 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 01:30:11.081876 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 01:30:11.083376 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 01:30:11.083451 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 01:30:11.085569 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 01:30:11.085639 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 01:30:11.087013 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 01:30:11.087100 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 01:30:11.093337 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 01:30:11.094089 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 01:30:11.094173 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 01:30:11.095701 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 01:30:11.095767 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 01:30:11.114944 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 01:30:11.115126 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 01:30:11.162581 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 17 01:30:11.162780 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 01:30:11.164872 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 01:30:11.165610 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 01:30:11.165696 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 01:30:11.172340 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 01:30:11.182808 systemd[1]: Switching root. Jan 17 01:30:11.219525 systemd-journald[203]: Journal stopped Jan 17 01:30:12.725096 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Jan 17 01:30:12.725229 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 01:30:12.725257 kernel: SELinux: policy capability open_perms=1 Jan 17 01:30:12.725295 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 01:30:12.725315 kernel: SELinux: policy capability always_check_network=0 Jan 17 01:30:12.725333 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 01:30:12.725352 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 01:30:12.725377 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 01:30:12.725395 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 01:30:12.725414 kernel: audit: type=1403 audit(1768613411.472:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 01:30:12.725433 systemd[1]: Successfully loaded SELinux policy in 47.476ms. Jan 17 01:30:12.725480 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.705ms. Jan 17 01:30:12.725504 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 01:30:12.725525 systemd[1]: Detected virtualization kvm. Jan 17 01:30:12.725553 systemd[1]: Detected architecture x86-64. Jan 17 01:30:12.725573 systemd[1]: Detected first boot. Jan 17 01:30:12.725592 systemd[1]: Hostname set to <srv-dv3jc.gb1.brightbox.com>. Jan 17 01:30:12.725627 systemd[1]: Initializing machine ID from VM UUID. Jan 17 01:30:12.725648 zram_generator::config[1060]: No configuration found. Jan 17 01:30:12.725680 systemd[1]: Populated /etc with preset unit settings. Jan 17 01:30:12.725702 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 01:30:12.725728 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 01:30:12.725748 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 01:30:12.725769 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 01:30:12.725789 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 01:30:12.725808 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 01:30:12.725827 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 01:30:12.725847 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 01:30:12.725878 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 01:30:12.725900 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 01:30:12.725919 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 01:30:12.725939 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 01:30:12.725959 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 01:30:12.725991 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 01:30:12.726011 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 01:30:12.726031 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 01:30:12.726066 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 01:30:12.726088 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 01:30:12.726117 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 01:30:12.726154 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 01:30:12.726178 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 01:30:12.726197 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 01:30:12.726231 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 01:30:12.726253 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 01:30:12.726273 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 01:30:12.726292 systemd[1]: Reached target slices.target - Slice Units. Jan 17 01:30:12.726311 systemd[1]: Reached target swap.target - Swaps. Jan 17 01:30:12.726331 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 01:30:12.726360 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 01:30:12.726402 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 01:30:12.726435 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 01:30:12.726458 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 01:30:12.726477 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 01:30:12.726497 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 01:30:12.726522 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 01:30:12.726543 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 01:30:12.726563 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:30:12.726595 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 01:30:12.726617 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 01:30:12.726647 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 01:30:12.726668 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 01:30:12.726689 systemd[1]: Reached target machines.target - Containers. Jan 17 01:30:12.726709 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 01:30:12.726728 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 17 01:30:12.726757 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 01:30:12.726779 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 01:30:12.726814 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 01:30:12.726835 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 01:30:12.726856 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 01:30:12.726875 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 01:30:12.726895 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 01:30:12.726914 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 01:30:12.726935 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 01:30:12.726955 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 01:30:12.727017 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 01:30:12.727043 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 01:30:12.727063 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 01:30:12.727083 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 01:30:12.727103 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 01:30:12.727122 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 01:30:12.727202 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 01:30:12.727230 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 01:30:12.727250 systemd[1]: Stopped verity-setup.service. Jan 17 01:30:12.727286 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:30:12.727308 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 01:30:12.727328 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 01:30:12.727348 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 01:30:12.727367 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 01:30:12.727400 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 01:30:12.727461 systemd-journald[1146]: Collecting audit messages is disabled. Jan 17 01:30:12.727511 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 01:30:12.727535 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 01:30:12.727555 systemd-journald[1146]: Journal started Jan 17 01:30:12.727587 systemd-journald[1146]: Runtime Journal (/run/log/journal/7b0aceb9937d40d39fe3241efee5cc5e) is 4.7M, max 38.0M, 33.2M free. Jan 17 01:30:12.287547 systemd[1]: Queued start job for default target multi-user.target. Jan 17 01:30:12.306370 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 01:30:12.307020 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 01:30:12.733226 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 01:30:12.735374 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jan 17 01:30:12.736534 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 01:30:12.736769 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 01:30:12.738298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 01:30:12.738499 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 01:30:12.740742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 01:30:12.740975 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 01:30:12.742555 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 01:30:12.744615 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 01:30:12.745860 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 01:30:12.768166 kernel: ACPI: bus type drm_connector registered Jan 17 01:30:12.766173 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 01:30:12.772158 kernel: fuse: init (API version 7.39) Jan 17 01:30:12.776226 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 01:30:12.778896 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 01:30:12.778946 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 01:30:12.780985 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 01:30:12.790194 kernel: loop: module loaded Jan 17 01:30:12.792323 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 01:30:12.797364 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 01:30:12.798243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 01:30:12.800055 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 01:30:12.808328 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 01:30:12.809254 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 01:30:12.815362 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 01:30:12.819340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 01:30:12.829360 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 01:30:12.832332 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 01:30:12.835308 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 01:30:12.835994 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 01:30:12.839117 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 01:30:12.839350 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 01:30:12.840650 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 01:30:12.840872 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 01:30:12.841763 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jan 17 01:30:12.843934 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 01:30:12.858525 systemd-journald[1146]: Time spent on flushing to /var/log/journal/7b0aceb9937d40d39fe3241efee5cc5e is 180.602ms for 1139 entries. Jan 17 01:30:12.858525 systemd-journald[1146]: System Journal (/var/log/journal/7b0aceb9937d40d39fe3241efee5cc5e) is 8.0M, max 584.8M, 576.8M free. Jan 17 01:30:13.080272 systemd-journald[1146]: Received client request to flush runtime journal. Jan 17 01:30:13.080350 kernel: loop0: detected capacity change from 0 to 224512 Jan 17 01:30:13.080393 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 01:30:13.080426 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 01:30:12.874379 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 01:30:12.875327 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 01:30:12.916242 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 01:30:12.917314 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 01:30:12.919085 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 01:30:12.931412 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 01:30:12.985996 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 01:30:12.987543 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 01:30:13.002749 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 01:30:13.007362 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 01:30:13.016348 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 01:30:13.083213 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 01:30:13.134960 kernel: loop2: detected capacity change from 0 to 140768 Jan 17 01:30:13.132349 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 01:30:13.136763 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jan 17 01:30:13.136783 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jan 17 01:30:13.151352 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 01:30:13.177852 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 01:30:13.217189 kernel: loop3: detected capacity change from 0 to 8 Jan 17 01:30:13.219250 udevadm[1214]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 01:30:13.260036 kernel: loop4: detected capacity change from 0 to 224512 Jan 17 01:30:13.283484 kernel: loop5: detected capacity change from 0 to 142488 Jan 17 01:30:13.304166 kernel: loop6: detected capacity change from 0 to 140768 Jan 17 01:30:13.331174 kernel: loop7: detected capacity change from 0 to 8 Jan 17 01:30:13.338604 (sd-merge)[1218]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 17 01:30:13.339590 (sd-merge)[1218]: Merged extensions into '/usr'. Jan 17 01:30:13.353751 systemd[1]: Reloading requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 01:30:13.353773 systemd[1]: Reloading... 
Jan 17 01:30:13.621174 zram_generator::config[1245]: No configuration found. Jan 17 01:30:14.041122 ldconfig[1184]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 01:30:14.064192 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 01:30:14.129879 systemd[1]: Reloading finished in 774 ms. Jan 17 01:30:14.161817 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 01:30:14.163220 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 01:30:14.164372 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 01:30:14.178413 systemd[1]: Starting ensure-sysext.service... Jan 17 01:30:14.181372 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 01:30:14.185524 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 01:30:14.197246 systemd[1]: Reloading requested from client PID 1302 ('systemctl') (unit ensure-sysext.service)... Jan 17 01:30:14.197272 systemd[1]: Reloading... Jan 17 01:30:14.244243 systemd-udevd[1304]: Using default interface naming scheme 'v255'. Jan 17 01:30:14.260121 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 01:30:14.262212 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 01:30:14.263990 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 01:30:14.264886 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Jan 17 01:30:14.265122 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Jan 17 01:30:14.273745 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 01:30:14.274366 systemd-tmpfiles[1303]: Skipping /boot Jan 17 01:30:14.305896 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 01:30:14.305914 systemd-tmpfiles[1303]: Skipping /boot Jan 17 01:30:14.318164 zram_generator::config[1329]: No configuration found. Jan 17 01:30:14.505210 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1337) Jan 17 01:30:14.577813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 01:30:14.662430 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 17 01:30:14.684193 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 01:30:14.690486 kernel: ACPI: button: Power Button [PWRF] Jan 17 01:30:14.696500 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 01:30:14.696869 systemd[1]: Reloading finished in 499 ms. Jan 17 01:30:14.719261 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 01:30:14.727208 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 01:30:14.772935 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 17 01:30:14.774913 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:30:14.780429 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 01:30:14.789036 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 01:30:14.795057 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 17 01:30:14.795134 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 01:30:14.795523 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 01:30:14.794494 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 01:30:14.800680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 01:30:14.811511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 01:30:14.816503 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 01:30:14.820465 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 01:30:14.821371 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 01:30:14.826456 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 01:30:14.833502 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 01:30:14.847476 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 01:30:14.882300 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 01:30:14.892453 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 01:30:14.895267 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:30:14.904631 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:30:14.904894 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 01:30:14.905245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 01:30:14.918613 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 01:30:14.921217 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:30:14.932434 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:30:14.932766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 01:30:14.944580 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 01:30:14.945503 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 01:30:14.945687 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 01:30:14.948280 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 01:30:14.951445 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 01:30:14.951666 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 01:30:14.959512 systemd[1]: Finished ensure-sysext.service. Jan 17 01:30:15.001499 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 01:30:15.024030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 01:30:15.028198 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 01:30:15.030928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 01:30:15.031455 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 01:30:15.036885 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 01:30:15.043022 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 01:30:15.044249 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 01:30:15.073918 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 01:30:15.075249 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 01:30:15.084723 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 01:30:15.084820 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 01:30:15.084858 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 01:30:15.107529 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 01:30:15.123351 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 01:30:15.269284 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 01:30:15.281346 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 01:30:15.309858 augenrules[1460]: No rules Jan 17 01:30:15.314588 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 01:30:15.366223 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 01:30:15.422667 systemd-networkd[1418]: lo: Link UP Jan 17 01:30:15.422681 systemd-networkd[1418]: lo: Gained carrier Jan 17 01:30:15.426984 systemd-networkd[1418]: Enumeration completed Jan 17 01:30:15.428590 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 01:30:15.428603 systemd-networkd[1418]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 01:30:15.430841 systemd-networkd[1418]: eth0: Link UP Jan 17 01:30:15.430853 systemd-networkd[1418]: eth0: Gained carrier Jan 17 01:30:15.430871 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 01:30:15.452777 systemd[1]: Started systemd-networkd.service - Network Configuration. 
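systemd-networkd warns that zz-default.network matched eth0 "based on potentially unpredictable interface name" because kernel-assigned ethN names are not guaranteed stable across reboots or hardware changes. A sketch of a more deterministic per-link file, with the path and MAC address below standing in for the real ones:

    # Hypothetical .network file matching on the NIC's MAC address
    # instead of its mutable kernel name.
    cat <<'EOF' > /etc/systemd/network/10-eth0.network
    [Match]
    MACAddress=52:54:00:00:00:01

    [Network]
    DHCP=yes
    EOF
    systemctl restart systemd-networkd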
Jan 17 01:30:15.458309 systemd-resolved[1419]: Positive Trust Anchors: Jan 17 01:30:15.458562 systemd-resolved[1419]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 01:30:15.458606 systemd-resolved[1419]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 01:30:15.462722 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 01:30:15.470175 systemd-resolved[1419]: Using system hostname 'srv-dv3jc.gb1.brightbox.com'. Jan 17 01:30:15.473247 systemd-networkd[1418]: eth0: DHCPv4 address 10.243.73.142/30, gateway 10.243.73.141 acquired from 10.243.73.141 Jan 17 01:30:15.474511 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 01:30:15.475233 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection. Jan 17 01:30:15.475603 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 01:30:15.477455 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 01:30:15.479473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 01:30:15.481451 systemd[1]: Reached target network.target - Network. Jan 17 01:30:15.482538 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 01:30:15.483357 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 01:30:15.499734 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 01:30:15.534561 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 01:30:15.535768 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 01:30:15.536523 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 01:30:15.537501 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 01:30:15.538321 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 01:30:15.539363 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 01:30:15.540225 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 01:30:15.540992 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 01:30:15.541745 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 01:30:15.541796 systemd[1]: Reached target paths.target - Path Units. Jan 17 01:30:15.542445 systemd[1]: Reached target timers.target - Timer Units. Jan 17 01:30:15.544460 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 01:30:15.547494 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 01:30:15.553577 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
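The "Positive Trust Anchors" dump is systemd-resolved loading the root-zone DNSSEC key (KSK 20326) plus built-in negative anchors that exempt private and special-use zones from validation. The resulting resolver state can be inspected at runtime; a quick check, assuming resolvectl on the same host:

    resolvectl status                              # per-link DNS servers and DNSSEC setting
    resolvectl query srv-dv3jc.gb1.brightbox.com   # resolve the hostname chosen above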
Jan 17 01:30:15.556324 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 01:30:15.557683 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 01:30:15.558568 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 01:30:15.559262 systemd[1]: Reached target basic.target - Basic System. Jan 17 01:30:15.559950 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 01:30:15.559995 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 01:30:15.573367 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 01:30:15.578365 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 01:30:15.581478 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 01:30:15.587389 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 01:30:15.594297 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 01:30:15.598800 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 01:30:15.600284 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 01:30:15.604323 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 01:30:15.608903 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 01:30:15.615323 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 01:30:15.624390 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 01:30:15.635769 jq[1481]: false Jan 17 01:30:15.638484 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 01:30:15.640012 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 01:30:15.641960 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 01:30:15.644501 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 01:30:15.652312 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 01:30:15.657117 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 01:30:15.663664 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 01:30:15.664223 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 01:30:15.690306 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 01:30:15.690593 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 01:30:15.699708 jq[1491]: true Jan 17 01:30:16.292244 systemd-resolved[1419]: Clock change detected. Flushing caches. Jan 17 01:30:16.292429 systemd-timesyncd[1427]: Contacted time server 51.89.151.183:123 (0.flatcar.pool.ntp.org). Jan 17 01:30:16.292504 systemd-timesyncd[1427]: Initial clock synchronization to Sat 2026-01-17 01:30:16.292170 UTC. Jan 17 01:30:16.305762 dbus-daemon[1480]: [system] SELinux support is enabled Jan 17 01:30:16.306023 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
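Shortly after this point systemd-timesyncd steps the clock (resolved logs "Clock change detected. Flushing caches" once the NTP response from 0.flatcar.pool.ntp.org lands). The sync state can be verified with timedatectl; a sketch assuming a live shell:

    timedatectl timesync-status   # server address, poll interval, offset
    timedatectl status            # expect 'System clock synchronized: yes'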
Jan 17 01:30:16.311406 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 01:30:16.311468 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 01:30:16.312332 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 01:30:16.312361 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 01:30:16.323425 extend-filesystems[1482]: Found loop4 Jan 17 01:30:16.323425 extend-filesystems[1482]: Found loop5 Jan 17 01:30:16.325395 dbus-daemon[1480]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1418 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 01:30:16.333301 update_engine[1490]: I20260117 01:30:16.333190 1490 main.cc:92] Flatcar Update Engine starting Jan 17 01:30:16.339801 extend-filesystems[1482]: Found loop6 Jan 17 01:30:16.339801 extend-filesystems[1482]: Found loop7 Jan 17 01:30:16.339801 extend-filesystems[1482]: Found vda Jan 17 01:30:16.339801 extend-filesystems[1482]: Found vda1 Jan 17 01:30:16.339801 extend-filesystems[1482]: Found vda2 Jan 17 01:30:16.339801 extend-filesystems[1482]: Found vda3 Jan 17 01:30:16.339801 extend-filesystems[1482]: Found usr Jan 17 01:30:16.339801 extend-filesystems[1482]: Found vda4 Jan 17 01:30:16.339801 extend-filesystems[1482]: Found vda6 Jan 17 01:30:16.339801 extend-filesystems[1482]: Found vda7 Jan 17 01:30:16.339801 extend-filesystems[1482]: Found vda9 Jan 17 01:30:16.339801 extend-filesystems[1482]: Checking size of /dev/vda9 Jan 17 01:30:16.338253 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 01:30:16.424631 extend-filesystems[1482]: Resized partition /dev/vda9 Jan 17 01:30:16.431375 update_engine[1490]: I20260117 01:30:16.351366 1490 update_check_scheduler.cc:74] Next update check in 11m34s Jan 17 01:30:16.431433 tar[1499]: linux-amd64/LICENSE Jan 17 01:30:16.431433 tar[1499]: linux-amd64/helm Jan 17 01:30:16.350994 systemd[1]: Started update-engine.service - Update Engine. Jan 17 01:30:16.431957 extend-filesystems[1522]: resize2fs 1.47.1 (20-May-2024) Jan 17 01:30:16.356648 (ntainerd)[1507]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 01:30:16.444324 jq[1505]: true Jan 17 01:30:16.388157 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 01:30:16.388405 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 01:30:16.402428 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 01:30:16.457915 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 17 01:30:16.555486 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1340) Jan 17 01:30:16.654502 systemd-logind[1489]: Watching system buttons on /dev/input/event2 (Power Button) Jan 17 01:30:16.654545 systemd-logind[1489]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 01:30:16.654866 systemd-logind[1489]: New seat seat0. 
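extend-filesystems walks the block devices, finds the root filesystem on /dev/vda9 smaller than its partition, and queues an online ext4 grow; the kernel's resize to 15121403 blocks shows up further down. The manual equivalent is a single resize2fs call, sketched here for the same device:

    # Online-grow the mounted ext4 filesystem; with no size argument
    # resize2fs expands to fill the underlying partition.
    resize2fs /dev/vda9
    dumpe2fs -h /dev/vda9 | grep 'Block count'   # confirm the new size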
Jan 17 01:30:16.660094 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 01:30:16.762687 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 01:30:16.765667 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 01:30:16.777065 dbus-daemon[1480]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1511 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 01:30:16.827486 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 01:30:16.844788 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 01:30:16.860554 polkitd[1540]: Started polkitd version 121 Jan 17 01:30:16.913750 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 01:30:16.915839 polkitd[1540]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 01:30:16.915961 polkitd[1540]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 01:30:16.920938 polkitd[1540]: Finished loading, compiling and executing 2 rules Jan 17 01:30:16.924836 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 01:30:16.925517 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 01:30:16.926629 polkitd[1540]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 01:30:16.954315 bash[1551]: Updated "/home/core/.ssh/authorized_keys" Jan 17 01:30:16.960121 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 01:30:16.969240 systemd-hostnamed[1511]: Hostname set to (static) Jan 17 01:30:17.022637 systemd[1]: Starting sshkeys.service... Jan 17 01:30:17.087649 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 01:30:17.097954 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 01:30:17.156653 containerd[1507]: time="2026-01-17T01:30:17.156412230Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 01:30:17.216826 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 17 01:30:17.236096 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 01:30:17.236096 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 17 01:30:17.236096 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 17 01:30:17.241575 extend-filesystems[1482]: Resized filesystem in /dev/vda9 Jan 17 01:30:17.238533 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 01:30:17.238859 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 01:30:17.254356 sshd_keygen[1503]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 01:30:17.271203 containerd[1507]: time="2026-01-17T01:30:17.271103873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 01:30:17.273805 containerd[1507]: time="2026-01-17T01:30:17.273764997Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 01:30:17.273922 containerd[1507]: time="2026-01-17T01:30:17.273897576Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 01:30:17.274064 containerd[1507]: time="2026-01-17T01:30:17.274036023Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 01:30:17.274454 containerd[1507]: time="2026-01-17T01:30:17.274426423Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 01:30:17.274554 containerd[1507]: time="2026-01-17T01:30:17.274530681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 01:30:17.274741 containerd[1507]: time="2026-01-17T01:30:17.274700588Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 01:30:17.274830 containerd[1507]: time="2026-01-17T01:30:17.274807202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 01:30:17.276975 containerd[1507]: time="2026-01-17T01:30:17.276300942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 01:30:17.276975 containerd[1507]: time="2026-01-17T01:30:17.276331738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 01:30:17.276975 containerd[1507]: time="2026-01-17T01:30:17.276355641Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 01:30:17.276975 containerd[1507]: time="2026-01-17T01:30:17.276371954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 01:30:17.276975 containerd[1507]: time="2026-01-17T01:30:17.276530146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 01:30:17.276975 containerd[1507]: time="2026-01-17T01:30:17.276923681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 01:30:17.278283 containerd[1507]: time="2026-01-17T01:30:17.278250237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 01:30:17.278731 containerd[1507]: time="2026-01-17T01:30:17.278356795Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 01:30:17.278731 containerd[1507]: time="2026-01-17T01:30:17.278543860Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 17 01:30:17.278731 containerd[1507]: time="2026-01-17T01:30:17.278634566Z" level=info msg="metadata content store policy set" policy=shared Jan 17 01:30:17.283490 containerd[1507]: time="2026-01-17T01:30:17.283460012Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 01:30:17.283661 containerd[1507]: time="2026-01-17T01:30:17.283634480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 01:30:17.287130 containerd[1507]: time="2026-01-17T01:30:17.284165480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 01:30:17.287130 containerd[1507]: time="2026-01-17T01:30:17.284223270Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 01:30:17.287130 containerd[1507]: time="2026-01-17T01:30:17.284248325Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 01:30:17.287130 containerd[1507]: time="2026-01-17T01:30:17.284443796Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 01:30:17.287130 containerd[1507]: time="2026-01-17T01:30:17.284817526Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 01:30:17.287130 containerd[1507]: time="2026-01-17T01:30:17.285003542Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 01:30:17.287130 containerd[1507]: time="2026-01-17T01:30:17.285030301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 01:30:17.287130 containerd[1507]: time="2026-01-17T01:30:17.285049366Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 01:30:17.287130 containerd[1507]: time="2026-01-17T01:30:17.285070366Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 01:30:17.287130 containerd[1507]: time="2026-01-17T01:30:17.285102345Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 01:30:17.287130 containerd[1507]: time="2026-01-17T01:30:17.285167927Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 01:30:17.287130 containerd[1507]: time="2026-01-17T01:30:17.285189987Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 01:30:17.287130 containerd[1507]: time="2026-01-17T01:30:17.285223691Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 01:30:17.287130 containerd[1507]: time="2026-01-17T01:30:17.285242371Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 01:30:17.287577 containerd[1507]: time="2026-01-17T01:30:17.285265591Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 01:30:17.287577 containerd[1507]: time="2026-01-17T01:30:17.285286760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 17 01:30:17.287577 containerd[1507]: time="2026-01-17T01:30:17.285327796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.287577 containerd[1507]: time="2026-01-17T01:30:17.285355834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.287577 containerd[1507]: time="2026-01-17T01:30:17.285374924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.287577 containerd[1507]: time="2026-01-17T01:30:17.285393985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.287577 containerd[1507]: time="2026-01-17T01:30:17.285472269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.287577 containerd[1507]: time="2026-01-17T01:30:17.285496827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.287577 containerd[1507]: time="2026-01-17T01:30:17.285514627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.287577 containerd[1507]: time="2026-01-17T01:30:17.285542445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.287577 containerd[1507]: time="2026-01-17T01:30:17.285563950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.287577 containerd[1507]: time="2026-01-17T01:30:17.285586185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.287577 containerd[1507]: time="2026-01-17T01:30:17.285614309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.287577 containerd[1507]: time="2026-01-17T01:30:17.285641470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.288072 containerd[1507]: time="2026-01-17T01:30:17.285664007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.288072 containerd[1507]: time="2026-01-17T01:30:17.285692765Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 01:30:17.288072 containerd[1507]: time="2026-01-17T01:30:17.285752197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.288072 containerd[1507]: time="2026-01-17T01:30:17.285775197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.288072 containerd[1507]: time="2026-01-17T01:30:17.285791722Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 01:30:17.288072 containerd[1507]: time="2026-01-17T01:30:17.285883352Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 01:30:17.288072 containerd[1507]: time="2026-01-17T01:30:17.285919150Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 01:30:17.288072 containerd[1507]: time="2026-01-17T01:30:17.285938480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 01:30:17.288072 containerd[1507]: time="2026-01-17T01:30:17.285959049Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 01:30:17.288072 containerd[1507]: time="2026-01-17T01:30:17.285974398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.288072 containerd[1507]: time="2026-01-17T01:30:17.286001505Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 01:30:17.288072 containerd[1507]: time="2026-01-17T01:30:17.286030845Z" level=info msg="NRI interface is disabled by configuration." Jan 17 01:30:17.288072 containerd[1507]: time="2026-01-17T01:30:17.286054987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 01:30:17.289934 containerd[1507]: time="2026-01-17T01:30:17.289847077Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 01:30:17.290306 containerd[1507]: time="2026-01-17T01:30:17.290279214Z" level=info msg="Connect containerd service" Jan 17 01:30:17.290441 containerd[1507]: time="2026-01-17T01:30:17.290415263Z" level=info msg="using legacy CRI server" Jan 17 01:30:17.290546 containerd[1507]: time="2026-01-17T01:30:17.290522858Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 01:30:17.290796 containerd[1507]: time="2026-01-17T01:30:17.290770928Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 01:30:17.291852 containerd[1507]: time="2026-01-17T01:30:17.291811445Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 01:30:17.299132 containerd[1507]: time="2026-01-17T01:30:17.293769679Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 01:30:17.299132 containerd[1507]: time="2026-01-17T01:30:17.293891840Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 01:30:17.299132 containerd[1507]: time="2026-01-17T01:30:17.293973508Z" level=info msg="Start subscribing containerd event" Jan 17 01:30:17.299132 containerd[1507]: time="2026-01-17T01:30:17.294066663Z" level=info msg="Start recovering state" Jan 17 01:30:17.299132 containerd[1507]: time="2026-01-17T01:30:17.294205141Z" level=info msg="Start event monitor" Jan 17 01:30:17.299132 containerd[1507]: time="2026-01-17T01:30:17.294236200Z" level=info msg="Start snapshots syncer" Jan 17 01:30:17.299132 containerd[1507]: time="2026-01-17T01:30:17.294267135Z" level=info msg="Start cni network conf syncer for default" Jan 17 01:30:17.299132 containerd[1507]: time="2026-01-17T01:30:17.294291257Z" level=info msg="Start streaming server" Jan 17 01:30:17.294527 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 01:30:17.300178 containerd[1507]: time="2026-01-17T01:30:17.299736108Z" level=info msg="containerd successfully booted in 0.145761s" Jan 17 01:30:17.325306 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 01:30:17.335205 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 01:30:17.346605 systemd[1]: Started sshd@0-10.243.73.142:22-20.161.92.111:59546.service - OpenSSH per-connection server daemon (20.161.92.111:59546). Jan 17 01:30:17.356979 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 01:30:17.357264 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 01:30:17.367533 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 01:30:17.401916 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 01:30:17.412304 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 01:30:17.420568 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 01:30:17.421672 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 01:30:17.502624 systemd-networkd[1418]: eth0: Gained IPv6LL Jan 17 01:30:17.510873 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
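The "failed to load cni during init" error above is normal for a first boot: containerd's CRI plugin starts before anything has populated /etc/cni/net.d, and a CNI addon installs the config later. Purely for illustration, a minimal bridge conflist of the kind that eventually lands there (the file name, network name, and subnet are assumptions):

    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }
    EOF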
Jan 17 01:30:17.520345 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 01:30:17.545521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:30:17.558527 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 01:30:17.603205 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 01:30:17.957713 tar[1499]: linux-amd64/README.md Jan 17 01:30:17.973403 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 01:30:18.006070 sshd[1577]: Accepted publickey for core from 20.161.92.111 port 59546 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:30:18.010623 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:30:18.031241 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 01:30:18.043697 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 01:30:18.050296 systemd-logind[1489]: New session 1 of user core. Jan 17 01:30:18.076233 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 01:30:18.090687 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 01:30:18.106132 (systemd)[1602]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 01:30:18.259011 systemd[1602]: Queued start job for default target default.target. Jan 17 01:30:18.265141 systemd[1602]: Created slice app.slice - User Application Slice. Jan 17 01:30:18.265276 systemd[1602]: Reached target paths.target - Paths. Jan 17 01:30:18.265306 systemd[1602]: Reached target timers.target - Timers. Jan 17 01:30:18.269259 systemd[1602]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 01:30:18.286198 systemd[1602]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 01:30:18.286288 systemd[1602]: Reached target sockets.target - Sockets. Jan 17 01:30:18.286312 systemd[1602]: Reached target basic.target - Basic System. Jan 17 01:30:18.286375 systemd[1602]: Reached target default.target - Main User Target. Jan 17 01:30:18.286429 systemd[1602]: Startup finished in 167ms. Jan 17 01:30:18.287058 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 01:30:18.294540 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 01:30:18.512404 systemd-networkd[1418]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d263:24:19ff:fef3:498e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d263:24:19ff:fef3:498e/64 assigned by NDisc. Jan 17 01:30:18.512417 systemd-networkd[1418]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 17 01:30:18.715581 systemd[1]: Started sshd@1-10.243.73.142:22-20.161.92.111:59556.service - OpenSSH per-connection server daemon (20.161.92.111:59556). Jan 17 01:30:19.310143 sshd[1615]: Accepted publickey for core from 20.161.92.111 port 59556 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:30:19.312193 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:30:19.319225 systemd-logind[1489]: New session 2 of user core. Jan 17 01:30:19.336015 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 01:30:19.364818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 01:30:19.382987 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 01:30:19.746123 sshd[1615]: pam_unix(sshd:session): session closed for user core Jan 17 01:30:19.752161 systemd[1]: sshd@1-10.243.73.142:22-20.161.92.111:59556.service: Deactivated successfully. Jan 17 01:30:19.754957 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 01:30:19.757278 systemd-logind[1489]: Session 2 logged out. Waiting for processes to exit. Jan 17 01:30:19.759214 systemd-logind[1489]: Removed session 2. Jan 17 01:30:19.848541 systemd[1]: Started sshd@2-10.243.73.142:22-20.161.92.111:59562.service - OpenSSH per-connection server daemon (20.161.92.111:59562). Jan 17 01:30:20.252261 kubelet[1624]: E0117 01:30:20.252191 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 01:30:20.255960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 01:30:20.256297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 01:30:20.256997 systemd[1]: kubelet.service: Consumed 1.634s CPU time. Jan 17 01:30:20.433898 sshd[1634]: Accepted publickey for core from 20.161.92.111 port 59562 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:30:20.436093 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:30:20.443376 systemd-logind[1489]: New session 3 of user core. Jan 17 01:30:20.453444 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 01:30:20.839231 sshd[1634]: pam_unix(sshd:session): session closed for user core Jan 17 01:30:20.843249 systemd[1]: sshd@2-10.243.73.142:22-20.161.92.111:59562.service: Deactivated successfully. Jan 17 01:30:20.845714 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 01:30:20.847659 systemd-logind[1489]: Session 3 logged out. Waiting for processes to exit. Jan 17 01:30:20.849168 systemd-logind[1489]: Removed session 3. Jan 17 01:30:22.501557 login[1584]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 01:30:22.503591 login[1585]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 01:30:22.511369 systemd-logind[1489]: New session 5 of user core. Jan 17 01:30:22.520490 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 01:30:22.524550 systemd-logind[1489]: New session 4 of user core. Jan 17 01:30:22.534607 systemd[1]: Started session-4.scope - Session 4 of User core. 
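This kubelet exit is the expected pre-bootstrap state: the unit points at /var/lib/kubelet/config.yaml, which only exists after kubeadm init or kubeadm join writes it, so systemd keeps restarting the service until then. A hypothetical excerpt of what such a file contains, printed rather than installed since the real one comes from kubeadm:

    # Illustrative KubeletConfiguration fragment; values are assumptions.
    cat <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    EOF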
Jan 17 01:30:23.277743 coreos-metadata[1479]: Jan 17 01:30:23.277 WARN failed to locate config-drive, using the metadata service API instead Jan 17 01:30:23.303633 coreos-metadata[1479]: Jan 17 01:30:23.303 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 17 01:30:23.310480 coreos-metadata[1479]: Jan 17 01:30:23.310 INFO Fetch failed with 404: resource not found Jan 17 01:30:23.310480 coreos-metadata[1479]: Jan 17 01:30:23.310 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 17 01:30:23.311124 coreos-metadata[1479]: Jan 17 01:30:23.311 INFO Fetch successful Jan 17 01:30:23.311295 coreos-metadata[1479]: Jan 17 01:30:23.311 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 17 01:30:23.326221 coreos-metadata[1479]: Jan 17 01:30:23.326 INFO Fetch successful Jan 17 01:30:23.326388 coreos-metadata[1479]: Jan 17 01:30:23.326 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 17 01:30:23.342119 coreos-metadata[1479]: Jan 17 01:30:23.342 INFO Fetch successful Jan 17 01:30:23.342280 coreos-metadata[1479]: Jan 17 01:30:23.342 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 17 01:30:23.358766 coreos-metadata[1479]: Jan 17 01:30:23.358 INFO Fetch successful Jan 17 01:30:23.358927 coreos-metadata[1479]: Jan 17 01:30:23.358 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 17 01:30:23.381277 coreos-metadata[1479]: Jan 17 01:30:23.381 INFO Fetch successful Jan 17 01:30:23.403891 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 01:30:23.406376 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 01:30:24.333504 coreos-metadata[1559]: Jan 17 01:30:24.333 WARN failed to locate config-drive, using the metadata service API instead Jan 17 01:30:24.355195 coreos-metadata[1559]: Jan 17 01:30:24.355 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 17 01:30:24.379589 coreos-metadata[1559]: Jan 17 01:30:24.379 INFO Fetch successful Jan 17 01:30:24.379796 coreos-metadata[1559]: Jan 17 01:30:24.379 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 01:30:24.407486 coreos-metadata[1559]: Jan 17 01:30:24.407 INFO Fetch successful Jan 17 01:30:24.409694 unknown[1559]: wrote ssh authorized keys file for user: core Jan 17 01:30:24.440177 update-ssh-keys[1677]: Updated "/home/core/.ssh/authorized_keys" Jan 17 01:30:24.441257 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 01:30:24.443903 systemd[1]: Finished sshkeys.service. Jan 17 01:30:24.447284 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 01:30:24.447742 systemd[1]: Startup finished in 1.679s (kernel) + 13.725s (initrd) + 12.442s (userspace) = 27.847s. Jan 17 01:30:30.427359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 01:30:30.435476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:30:30.648364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
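coreos-metadata first looks for an OpenStack config drive, then falls back to the EC2-compatible link-local endpoint, which is why the initial 404 on meta_data.json is harmless. The same fetches can be reproduced by hand from the instance:

    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key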
Jan 17 01:30:30.649779 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 01:30:30.749404 kubelet[1689]: E0117 01:30:30.749209 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 01:30:30.754634 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 01:30:30.754897 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 01:30:30.945004 systemd[1]: Started sshd@3-10.243.73.142:22-20.161.92.111:56228.service - OpenSSH per-connection server daemon (20.161.92.111:56228). Jan 17 01:30:31.516382 sshd[1697]: Accepted publickey for core from 20.161.92.111 port 56228 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:30:31.518431 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:30:31.524285 systemd-logind[1489]: New session 6 of user core. Jan 17 01:30:31.534394 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 01:30:31.919269 sshd[1697]: pam_unix(sshd:session): session closed for user core Jan 17 01:30:31.924518 systemd[1]: sshd@3-10.243.73.142:22-20.161.92.111:56228.service: Deactivated successfully. Jan 17 01:30:31.926925 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 01:30:31.927902 systemd-logind[1489]: Session 6 logged out. Waiting for processes to exit. Jan 17 01:30:31.929286 systemd-logind[1489]: Removed session 6. Jan 17 01:30:32.032521 systemd[1]: Started sshd@4-10.243.73.142:22-20.161.92.111:56236.service - OpenSSH per-connection server daemon (20.161.92.111:56236). Jan 17 01:30:32.591946 sshd[1704]: Accepted publickey for core from 20.161.92.111 port 56236 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:30:32.594031 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:30:32.600384 systemd-logind[1489]: New session 7 of user core. Jan 17 01:30:32.608380 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 01:30:32.992456 sshd[1704]: pam_unix(sshd:session): session closed for user core Jan 17 01:30:32.997258 systemd[1]: sshd@4-10.243.73.142:22-20.161.92.111:56236.service: Deactivated successfully. Jan 17 01:30:32.999261 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 01:30:33.000174 systemd-logind[1489]: Session 7 logged out. Waiting for processes to exit. Jan 17 01:30:33.002015 systemd-logind[1489]: Removed session 7. Jan 17 01:30:33.100675 systemd[1]: Started sshd@5-10.243.73.142:22-20.161.92.111:56012.service - OpenSSH per-connection server daemon (20.161.92.111:56012). Jan 17 01:30:33.658803 sshd[1711]: Accepted publickey for core from 20.161.92.111 port 56012 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:30:33.660835 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:30:33.667954 systemd-logind[1489]: New session 8 of user core. Jan 17 01:30:33.681582 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 01:30:34.061665 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 17 01:30:34.065528 systemd-logind[1489]: Session 8 logged out. 
Waiting for processes to exit. Jan 17 01:30:34.066146 systemd[1]: sshd@5-10.243.73.142:22-20.161.92.111:56012.service: Deactivated successfully. Jan 17 01:30:34.068049 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 01:30:34.069881 systemd-logind[1489]: Removed session 8. Jan 17 01:30:34.172645 systemd[1]: Started sshd@6-10.243.73.142:22-20.161.92.111:56028.service - OpenSSH per-connection server daemon (20.161.92.111:56028). Jan 17 01:30:34.734643 sshd[1718]: Accepted publickey for core from 20.161.92.111 port 56028 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:30:34.736642 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:30:34.742995 systemd-logind[1489]: New session 9 of user core. Jan 17 01:30:34.750377 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 01:30:35.062069 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 01:30:35.062562 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 01:30:35.076803 sudo[1721]: pam_unix(sudo:session): session closed for user root Jan 17 01:30:35.167472 sshd[1718]: pam_unix(sshd:session): session closed for user core Jan 17 01:30:35.171502 systemd[1]: sshd@6-10.243.73.142:22-20.161.92.111:56028.service: Deactivated successfully. Jan 17 01:30:35.173704 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 01:30:35.175692 systemd-logind[1489]: Session 9 logged out. Waiting for processes to exit. Jan 17 01:30:35.177164 systemd-logind[1489]: Removed session 9. Jan 17 01:30:35.274481 systemd[1]: Started sshd@7-10.243.73.142:22-20.161.92.111:56040.service - OpenSSH per-connection server daemon (20.161.92.111:56040). Jan 17 01:30:35.838136 sshd[1726]: Accepted publickey for core from 20.161.92.111 port 56040 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:30:35.840268 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:30:35.848078 systemd-logind[1489]: New session 10 of user core. Jan 17 01:30:35.855471 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 01:30:36.153483 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 01:30:36.153923 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 01:30:36.160531 sudo[1730]: pam_unix(sudo:session): session closed for user root Jan 17 01:30:36.168581 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 01:30:36.169007 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 01:30:36.191580 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 01:30:36.194407 auditctl[1733]: No rules Jan 17 01:30:36.195879 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 01:30:36.196219 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 01:30:36.199005 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 01:30:36.247876 augenrules[1751]: No rules Jan 17 01:30:36.248837 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
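The sudo sequence above removes the shipped rule files and restarts audit-rules.service, which flushes the kernel's audit ruleset; both auditctl and augenrules then correctly report "No rules". Done by hand, the same cycle looks roughly like this:

    auditctl -D          # delete all rules currently loaded in the kernel
    augenrules --load    # rebuild from /etc/audit/rules.d/*.rules and load
    auditctl -l          # list active rules; prints 'No rules' if empty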
Jan 17 01:30:36.250866 sudo[1729]: pam_unix(sudo:session): session closed for user root Jan 17 01:30:36.340921 sshd[1726]: pam_unix(sshd:session): session closed for user core Jan 17 01:30:36.344863 systemd-logind[1489]: Session 10 logged out. Waiting for processes to exit. Jan 17 01:30:36.345287 systemd[1]: sshd@7-10.243.73.142:22-20.161.92.111:56040.service: Deactivated successfully. Jan 17 01:30:36.347277 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 01:30:36.349096 systemd-logind[1489]: Removed session 10. Jan 17 01:30:36.449563 systemd[1]: Started sshd@8-10.243.73.142:22-20.161.92.111:56052.service - OpenSSH per-connection server daemon (20.161.92.111:56052). Jan 17 01:30:37.009098 sshd[1759]: Accepted publickey for core from 20.161.92.111 port 56052 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:30:37.011070 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:30:37.016643 systemd-logind[1489]: New session 11 of user core. Jan 17 01:30:37.028631 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 01:30:37.324276 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 01:30:37.324757 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 01:30:37.908460 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 01:30:37.910192 (dockerd)[1777]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 01:30:38.511026 dockerd[1777]: time="2026-01-17T01:30:38.510951007Z" level=info msg="Starting up" Jan 17 01:30:38.691508 dockerd[1777]: time="2026-01-17T01:30:38.691422919Z" level=info msg="Loading containers: start." Jan 17 01:30:38.860203 kernel: Initializing XFRM netlink socket Jan 17 01:30:38.967901 systemd-networkd[1418]: docker0: Link UP Jan 17 01:30:38.993453 dockerd[1777]: time="2026-01-17T01:30:38.993393942Z" level=info msg="Loading containers: done." Jan 17 01:30:39.037967 dockerd[1777]: time="2026-01-17T01:30:39.037895661Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 01:30:39.038186 dockerd[1777]: time="2026-01-17T01:30:39.038043166Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 01:30:39.038264 dockerd[1777]: time="2026-01-17T01:30:39.038201784Z" level=info msg="Daemon has completed initialization" Jan 17 01:30:39.039430 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1381362366-merged.mount: Deactivated successfully. Jan 17 01:30:39.099923 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 01:30:39.101335 dockerd[1777]: time="2026-01-17T01:30:39.099955437Z" level=info msg="API listen on /run/docker.sock" Jan 17 01:30:40.333645 containerd[1507]: time="2026-01-17T01:30:40.333485286Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 17 01:30:40.927204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 01:30:40.947447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:30:41.203444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1123915642.mount: Deactivated successfully. 
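The overlay2 warning means this kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR, so dockerd disables the native overlay diff path and falls back to a slower naive diff when building images; it is a performance note, not an error. The active driver can be confirmed once the daemon is up:

    docker info --format '{{.Driver}}'        # expect 'overlay2'
    docker info | grep -A3 'Storage Driver'   # driver details and warnings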
Jan 17 01:30:41.343373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 01:30:41.353567 (kubelet)[1935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 01:30:41.468816 kubelet[1935]: E0117 01:30:41.468357 1935 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 01:30:41.471494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 01:30:41.471709 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 01:30:43.637571 containerd[1507]: time="2026-01-17T01:30:43.637451447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:43.644672 containerd[1507]: time="2026-01-17T01:30:43.643940214Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:43.644672 containerd[1507]: time="2026-01-17T01:30:43.644257100Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070655" Jan 17 01:30:43.648350 containerd[1507]: time="2026-01-17T01:30:43.648316010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:43.650822 containerd[1507]: time="2026-01-17T01:30:43.650296115Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.316675551s" Jan 17 01:30:43.650822 containerd[1507]: time="2026-01-17T01:30:43.650368605Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 17 01:30:43.651421 containerd[1507]: time="2026-01-17T01:30:43.651392282Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 17 01:30:46.259570 containerd[1507]: time="2026-01-17T01:30:46.259433756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:46.261721 containerd[1507]: time="2026-01-17T01:30:46.261364805Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993362" Jan 17 01:30:46.262608 containerd[1507]: time="2026-01-17T01:30:46.262569570Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:46.277585 containerd[1507]: time="2026-01-17T01:30:46.277533145Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:46.279395 containerd[1507]: time="2026-01-17T01:30:46.279351596Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.627822027s" Jan 17 01:30:46.279808 containerd[1507]: time="2026-01-17T01:30:46.279513428Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 17 01:30:46.280624 containerd[1507]: time="2026-01-17T01:30:46.280593426Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 17 01:30:48.296155 containerd[1507]: time="2026-01-17T01:30:48.295724577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:48.298959 containerd[1507]: time="2026-01-17T01:30:48.298900865Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405084" Jan 17 01:30:48.300339 containerd[1507]: time="2026-01-17T01:30:48.300263024Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:48.304139 containerd[1507]: time="2026-01-17T01:30:48.303927024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:48.307428 containerd[1507]: time="2026-01-17T01:30:48.305650774Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.025013801s" Jan 17 01:30:48.307428 containerd[1507]: time="2026-01-17T01:30:48.305701763Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 17 01:30:48.307633 containerd[1507]: time="2026-01-17T01:30:48.307452326Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 01:30:48.536908 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 17 01:30:50.001770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2667881473.mount: Deactivated successfully. 
Jan 17 01:30:50.898561 containerd[1507]: time="2026-01-17T01:30:50.898455490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:50.900068 containerd[1507]: time="2026-01-17T01:30:50.899858436Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161907" Jan 17 01:30:50.900783 containerd[1507]: time="2026-01-17T01:30:50.900744384Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:50.903615 containerd[1507]: time="2026-01-17T01:30:50.903571312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:50.904627 containerd[1507]: time="2026-01-17T01:30:50.904592085Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.597099984s" Jan 17 01:30:50.904852 containerd[1507]: time="2026-01-17T01:30:50.904724269Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 17 01:30:50.905707 containerd[1507]: time="2026-01-17T01:30:50.905677062Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 17 01:30:51.503688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1589470129.mount: Deactivated successfully. Jan 17 01:30:51.506294 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 01:30:51.514471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:30:51.734397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 01:30:51.737286 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 01:30:51.829032 kubelet[2035]: E0117 01:30:51.827608 2035 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 01:30:51.831091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 01:30:51.831397 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
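This kubelet crash, like the identical one at 01:30:41, is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is only written by kubeadm during init/join, so the unit fails and systemd restarts it until that file appears. A preflight sketch for the files kubeadm normally drops; the first path is taken from the error above, the other two are assumed kubeadm defaults, not something this log confirms:

    from pathlib import Path

    expected = [
        Path("/var/lib/kubelet/config.yaml"),        # path from the error above
        Path("/var/lib/kubelet/kubeadm-flags.env"),  # assumed kubeadm default
        Path("/etc/kubernetes/kubelet.conf"),        # assumed kubeadm default
    ]
    for p in expected:
        print("ok     " if p.exists() else "MISSING", p)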
Jan 17 01:30:53.181352 containerd[1507]: time="2026-01-17T01:30:53.180495262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:53.183239 containerd[1507]: time="2026-01-17T01:30:53.183161987Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jan 17 01:30:53.183895 containerd[1507]: time="2026-01-17T01:30:53.183859999Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:53.189217 containerd[1507]: time="2026-01-17T01:30:53.189124798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:53.191315 containerd[1507]: time="2026-01-17T01:30:53.190825088Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.285103897s" Jan 17 01:30:53.191315 containerd[1507]: time="2026-01-17T01:30:53.190887689Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 17 01:30:53.192792 containerd[1507]: time="2026-01-17T01:30:53.192319757Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 01:30:53.788848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833190495.mount: Deactivated successfully. 
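Unit names like var-lib-containerd-tmpmounts-containerd\x2dmount1833190495.mount above come from systemd's path escaping, where '/' becomes '-' and a literal '-' (like most other punctuation) becomes a \xNN byte escape. A rough sketch of the rule, ignoring edge cases such as a leading dot:

    def systemd_escape_path(path: str) -> str:
        """Approximate systemd path escaping: '/' -> '-', and characters
        outside [a-zA-Z0-9:_.] -> \\xNN, so '-' itself becomes \\x2d."""
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in ":_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount1833190495") + ".mount")
    # -> var-lib-containerd-tmpmounts-containerd\x2dmount1833190495.mount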
Jan 17 01:30:53.792657 containerd[1507]: time="2026-01-17T01:30:53.792598737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:53.794346 containerd[1507]: time="2026-01-17T01:30:53.794300494Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 17 01:30:53.795502 containerd[1507]: time="2026-01-17T01:30:53.795444249Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:53.798691 containerd[1507]: time="2026-01-17T01:30:53.798654572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:53.800624 containerd[1507]: time="2026-01-17T01:30:53.800426749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 607.494556ms" Jan 17 01:30:53.800624 containerd[1507]: time="2026-01-17T01:30:53.800473509Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 01:30:53.801822 containerd[1507]: time="2026-01-17T01:30:53.801630261Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 17 01:30:54.410606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339762773.mount: Deactivated successfully. Jan 17 01:30:57.773959 containerd[1507]: time="2026-01-17T01:30:57.773888501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:57.776483 containerd[1507]: time="2026-01-17T01:30:57.776427761Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Jan 17 01:30:57.777715 containerd[1507]: time="2026-01-17T01:30:57.777658361Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:57.783151 containerd[1507]: time="2026-01-17T01:30:57.782331762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:30:57.784817 containerd[1507]: time="2026-01-17T01:30:57.784361886Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.982690593s" Jan 17 01:30:57.784817 containerd[1507]: time="2026-01-17T01:30:57.784410871Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 17 01:31:01.613785 update_engine[1490]: I20260117 01:31:01.613343 1490 update_attempter.cc:509] Updating boot flags... 
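Each pull above resolves to an immutable repo digest (for etcd, sha256:c6a9d1…), and those digests are plain SHA-256 over the referenced content, which is what makes offline re-verification possible. A minimal illustration of the content addressing; the blob path in the comment is containerd's usual content-store layout, not something this log prints:

    import hashlib

    def sha256_digest(path: str, chunk: int = 1 << 20) -> str:
        """Stream a file and return its OCI-style digest string."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return "sha256:" + h.hexdigest()

    # A blob under /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/
    # should hash back to its own file name.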
Jan 17 01:31:01.770156 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2167)
Jan 17 01:31:01.838705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 17 01:31:01.850385 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2166)
Jan 17 01:31:01.849547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 01:31:01.964163 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2166)
Jan 17 01:31:02.453450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 01:31:02.463744 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 01:31:02.532177 kubelet[2183]: E0117 01:31:02.532090 2183 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 01:31:02.535506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 01:31:02.535749 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 01:31:02.642549 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 01:31:02.651559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 01:31:02.692570 systemd[1]: Reloading requested from client PID 2198 ('systemctl') (unit session-11.scope)...
Jan 17 01:31:02.692867 systemd[1]: Reloading...
Jan 17 01:31:02.894542 zram_generator::config[2246]: No configuration found.
Jan 17 01:31:03.035126 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 01:31:03.141411 systemd[1]: Reloading finished in 447 ms.
Jan 17 01:31:03.222422 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 17 01:31:03.222585 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 17 01:31:03.223182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 01:31:03.231704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 01:31:03.384496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 01:31:03.397861 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 01:31:03.523210 kubelet[2305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 01:31:03.523210 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 01:31:03.523210 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
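This fourth scheduled restart is the last before the reload above replaces the unit. The failures land roughly ten seconds apart, consistent with Restart=on-failure and a RestartSec of about 10 s (an inference from the spacing, not something the log states). Recovering the cadence from the "Scheduled restart job" timestamps above:

    from datetime import datetime

    # Timestamps copied from the "Scheduled restart job" entries (counters 2-4);
    # the year is implied, since journald's short format omits it.
    stamps = ["Jan 17 01:30:40.927204", "Jan 17 01:30:51.506294", "Jan 17 01:31:01.838705"]
    times = [datetime.strptime("2026 " + s, "%Y %b %d %H:%M:%S.%f") for s in stamps]
    for a, b in zip(times, times[1:]):
        print(f"gap between restarts: {(b - a).total_seconds():.1f}s")  # 10.6s, 10.3s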
Jan 17 01:31:03.524681 kubelet[2305]: I0117 01:31:03.524578 2305 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 01:31:04.503912 kubelet[2305]: I0117 01:31:04.503830 2305 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 01:31:04.503912 kubelet[2305]: I0117 01:31:04.503885 2305 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 01:31:04.504347 kubelet[2305]: I0117 01:31:04.504313 2305 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 01:31:04.541169 kubelet[2305]: I0117 01:31:04.539832 2305 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 01:31:04.541805 kubelet[2305]: E0117 01:31:04.541586 2305 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.73.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.73.142:6443: connect: connection refused" logger="UnhandledError" Jan 17 01:31:04.554662 kubelet[2305]: E0117 01:31:04.554629 2305 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 01:31:04.554903 kubelet[2305]: I0117 01:31:04.554883 2305 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 01:31:04.563085 kubelet[2305]: I0117 01:31:04.563054 2305 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 01:31:04.567796 kubelet[2305]: I0117 01:31:04.567738 2305 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 01:31:04.568284 kubelet[2305]: I0117 01:31:04.567915 2305 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-dv3jc.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 01:31:04.570316 kubelet[2305]: I0117 01:31:04.570293 2305 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 01:31:04.570435 kubelet[2305]: I0117 01:31:04.570417 2305 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 01:31:04.571861 kubelet[2305]: I0117 01:31:04.571827 2305 state_mem.go:36] "Initialized new in-memory state store" Jan 17 01:31:04.577079 kubelet[2305]: I0117 01:31:04.576913 2305 kubelet.go:446] "Attempting to sync node with API server" Jan 17 01:31:04.577079 kubelet[2305]: I0117 01:31:04.576978 2305 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 01:31:04.577079 kubelet[2305]: I0117 01:31:04.577024 2305 kubelet.go:352] "Adding apiserver pod source" Jan 17 01:31:04.577079 kubelet[2305]: I0117 01:31:04.577049 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 01:31:04.589156 kubelet[2305]: W0117 01:31:04.588601 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.73.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-dv3jc.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.73.142:6443: connect: connection refused Jan 17 01:31:04.589156 kubelet[2305]: E0117 01:31:04.588698 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.73.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-dv3jc.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.73.142:6443: connect: connection refused" logger="UnhandledError" Jan 17 
01:31:04.589156 kubelet[2305]: I0117 01:31:04.588929 2305 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 01:31:04.592681 kubelet[2305]: I0117 01:31:04.592656 2305 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 01:31:04.592917 kubelet[2305]: W0117 01:31:04.592898 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 01:31:04.597500 kubelet[2305]: W0117 01:31:04.597451 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.73.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.73.142:6443: connect: connection refused Jan 17 01:31:04.597611 kubelet[2305]: E0117 01:31:04.597511 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.73.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.73.142:6443: connect: connection refused" logger="UnhandledError" Jan 17 01:31:04.598768 kubelet[2305]: I0117 01:31:04.598734 2305 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 01:31:04.598886 kubelet[2305]: I0117 01:31:04.598794 2305 server.go:1287] "Started kubelet" Jan 17 01:31:04.602991 kubelet[2305]: I0117 01:31:04.602892 2305 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 01:31:04.604846 kubelet[2305]: I0117 01:31:04.604822 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 01:31:04.605431 kubelet[2305]: I0117 01:31:04.605390 2305 server.go:479] "Adding debug handlers to kubelet server" Jan 17 01:31:04.607995 kubelet[2305]: I0117 01:31:04.607497 2305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 01:31:04.607995 kubelet[2305]: I0117 01:31:04.607868 2305 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 01:31:04.613536 kubelet[2305]: I0117 01:31:04.613499 2305 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 01:31:04.614337 kubelet[2305]: E0117 01:31:04.613817 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-dv3jc.gb1.brightbox.com\" not found" Jan 17 01:31:04.614337 kubelet[2305]: I0117 01:31:04.614333 2305 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 01:31:04.614479 kubelet[2305]: I0117 01:31:04.614417 2305 reconciler.go:26] "Reconciler: start to sync state" Jan 17 01:31:04.619136 kubelet[2305]: I0117 01:31:04.617908 2305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 01:31:04.624519 kubelet[2305]: E0117 01:31:04.621813 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.73.142:6443/api/v1/namespaces/default/events\": dial tcp 10.243.73.142:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-dv3jc.gb1.brightbox.com.188b60935661df3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-dv3jc.gb1.brightbox.com,UID:srv-dv3jc.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-dv3jc.gb1.brightbox.com,},FirstTimestamp:2026-01-17 01:31:04.598765374 +0000 UTC m=+1.194089937,LastTimestamp:2026-01-17 01:31:04.598765374 +0000 UTC m=+1.194089937,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-dv3jc.gb1.brightbox.com,}" Jan 17 01:31:04.624882 kubelet[2305]: W0117 01:31:04.624837 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.73.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.73.142:6443: connect: connection refused Jan 17 01:31:04.625151 kubelet[2305]: E0117 01:31:04.625121 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.73.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.73.142:6443: connect: connection refused" logger="UnhandledError" Jan 17 01:31:04.625387 kubelet[2305]: E0117 01:31:04.625353 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.73.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-dv3jc.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.73.142:6443: connect: connection refused" interval="200ms" Jan 17 01:31:04.626184 kubelet[2305]: I0117 01:31:04.626157 2305 factory.go:221] Registration of the systemd container factory successfully Jan 17 01:31:04.626429 kubelet[2305]: I0117 01:31:04.626401 2305 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 01:31:04.630428 kubelet[2305]: E0117 01:31:04.630397 2305 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 01:31:04.632153 kubelet[2305]: I0117 01:31:04.631546 2305 factory.go:221] Registration of the containerd container factory successfully Jan 17 01:31:04.638446 kubelet[2305]: I0117 01:31:04.638390 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 01:31:04.640996 kubelet[2305]: I0117 01:31:04.640463 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 01:31:04.640996 kubelet[2305]: I0117 01:31:04.640508 2305 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 01:31:04.640996 kubelet[2305]: I0117 01:31:04.640557 2305 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
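The reflector, lease, and CSR failures above all share one cause: nothing is listening on 10.243.73.142:6443 yet, because this same kubelet must first start the static kube-apiserver pod; that is the normal kubeadm bootstrap ordering. A probe in the same spirit as the kubelet's own retries; host and port come from the log, while the timeout and backoff are arbitrary:

    import socket
    import time

    def wait_for_apiserver(host="10.243.73.142", port=6443, timeout=120.0):
        """Poll until the API server's TCP port accepts connections."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return True
            except OSError:
                time.sleep(2)  # crude fixed backoff; the kubelet's own retry is smarter
        return False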
Jan 17 01:31:04.640996 kubelet[2305]: I0117 01:31:04.640570 2305 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 01:31:04.640996 kubelet[2305]: E0117 01:31:04.640649 2305 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 01:31:04.651706 kubelet[2305]: W0117 01:31:04.651659 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.73.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.73.142:6443: connect: connection refused Jan 17 01:31:04.651902 kubelet[2305]: E0117 01:31:04.651862 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.73.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.73.142:6443: connect: connection refused" logger="UnhandledError" Jan 17 01:31:04.669654 kubelet[2305]: I0117 01:31:04.669631 2305 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 01:31:04.669919 kubelet[2305]: I0117 01:31:04.669900 2305 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 01:31:04.670031 kubelet[2305]: I0117 01:31:04.670014 2305 state_mem.go:36] "Initialized new in-memory state store" Jan 17 01:31:04.671959 kubelet[2305]: I0117 01:31:04.671939 2305 policy_none.go:49] "None policy: Start" Jan 17 01:31:04.672382 kubelet[2305]: I0117 01:31:04.672076 2305 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 01:31:04.672382 kubelet[2305]: I0117 01:31:04.672135 2305 state_mem.go:35] "Initializing new in-memory state store" Jan 17 01:31:04.679845 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 01:31:04.693290 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 01:31:04.714222 kubelet[2305]: E0117 01:31:04.714165 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-dv3jc.gb1.brightbox.com\" not found" Jan 17 01:31:04.717178 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 01:31:04.720224 kubelet[2305]: I0117 01:31:04.720188 2305 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 01:31:04.720707 kubelet[2305]: I0117 01:31:04.720686 2305 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 01:31:04.720884 kubelet[2305]: I0117 01:31:04.720832 2305 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 01:31:04.722178 kubelet[2305]: I0117 01:31:04.721304 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 01:31:04.722846 kubelet[2305]: E0117 01:31:04.722802 2305 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 01:31:04.723071 kubelet[2305]: E0117 01:31:04.723049 2305 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-dv3jc.gb1.brightbox.com\" not found" Jan 17 01:31:04.756010 systemd[1]: Created slice kubepods-burstable-pod9735906e9fc172029b74bec2ef80daf9.slice - libcontainer container kubepods-burstable-pod9735906e9fc172029b74bec2ef80daf9.slice. Jan 17 01:31:04.779808 kubelet[2305]: E0117 01:31:04.779765 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dv3jc.gb1.brightbox.com\" not found" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:04.786532 systemd[1]: Created slice kubepods-burstable-podc7bf2fd6440a942c12fb1a154cb8f83b.slice - libcontainer container kubepods-burstable-podc7bf2fd6440a942c12fb1a154cb8f83b.slice. Jan 17 01:31:04.790745 kubelet[2305]: E0117 01:31:04.789267 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dv3jc.gb1.brightbox.com\" not found" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:04.792067 systemd[1]: Created slice kubepods-burstable-pod719daed51f2353bcb75928d00f132bde.slice - libcontainer container kubepods-burstable-pod719daed51f2353bcb75928d00f132bde.slice. Jan 17 01:31:04.794375 kubelet[2305]: E0117 01:31:04.794338 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dv3jc.gb1.brightbox.com\" not found" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:04.825259 kubelet[2305]: I0117 01:31:04.824826 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:04.825395 kubelet[2305]: E0117 01:31:04.825336 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.73.142:6443/api/v1/nodes\": dial tcp 10.243.73.142:6443: connect: connection refused" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:04.826852 kubelet[2305]: E0117 01:31:04.826788 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.73.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-dv3jc.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.73.142:6443: connect: connection refused" interval="400ms" Jan 17 01:31:04.915294 kubelet[2305]: I0117 01:31:04.915215 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/719daed51f2353bcb75928d00f132bde-ca-certs\") pod \"kube-controller-manager-srv-dv3jc.gb1.brightbox.com\" (UID: \"719daed51f2353bcb75928d00f132bde\") " pod="kube-system/kube-controller-manager-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:04.915294 kubelet[2305]: I0117 01:31:04.915292 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/719daed51f2353bcb75928d00f132bde-k8s-certs\") pod \"kube-controller-manager-srv-dv3jc.gb1.brightbox.com\" (UID: \"719daed51f2353bcb75928d00f132bde\") " pod="kube-system/kube-controller-manager-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:04.915583 kubelet[2305]: I0117 01:31:04.915324 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/719daed51f2353bcb75928d00f132bde-usr-share-ca-certificates\") pod 
\"kube-controller-manager-srv-dv3jc.gb1.brightbox.com\" (UID: \"719daed51f2353bcb75928d00f132bde\") " pod="kube-system/kube-controller-manager-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:04.915583 kubelet[2305]: I0117 01:31:04.915351 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9735906e9fc172029b74bec2ef80daf9-kubeconfig\") pod \"kube-scheduler-srv-dv3jc.gb1.brightbox.com\" (UID: \"9735906e9fc172029b74bec2ef80daf9\") " pod="kube-system/kube-scheduler-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:04.915583 kubelet[2305]: I0117 01:31:04.915381 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7bf2fd6440a942c12fb1a154cb8f83b-ca-certs\") pod \"kube-apiserver-srv-dv3jc.gb1.brightbox.com\" (UID: \"c7bf2fd6440a942c12fb1a154cb8f83b\") " pod="kube-system/kube-apiserver-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:04.915583 kubelet[2305]: I0117 01:31:04.915407 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7bf2fd6440a942c12fb1a154cb8f83b-usr-share-ca-certificates\") pod \"kube-apiserver-srv-dv3jc.gb1.brightbox.com\" (UID: \"c7bf2fd6440a942c12fb1a154cb8f83b\") " pod="kube-system/kube-apiserver-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:04.915583 kubelet[2305]: I0117 01:31:04.915431 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7bf2fd6440a942c12fb1a154cb8f83b-k8s-certs\") pod \"kube-apiserver-srv-dv3jc.gb1.brightbox.com\" (UID: \"c7bf2fd6440a942c12fb1a154cb8f83b\") " pod="kube-system/kube-apiserver-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:04.915839 kubelet[2305]: I0117 01:31:04.915455 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/719daed51f2353bcb75928d00f132bde-flexvolume-dir\") pod \"kube-controller-manager-srv-dv3jc.gb1.brightbox.com\" (UID: \"719daed51f2353bcb75928d00f132bde\") " pod="kube-system/kube-controller-manager-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:04.915839 kubelet[2305]: I0117 01:31:04.915483 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/719daed51f2353bcb75928d00f132bde-kubeconfig\") pod \"kube-controller-manager-srv-dv3jc.gb1.brightbox.com\" (UID: \"719daed51f2353bcb75928d00f132bde\") " pod="kube-system/kube-controller-manager-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:05.028679 kubelet[2305]: I0117 01:31:05.028535 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:05.029770 kubelet[2305]: E0117 01:31:05.029648 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.73.142:6443/api/v1/nodes\": dial tcp 10.243.73.142:6443: connect: connection refused" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:05.082713 containerd[1507]: time="2026-01-17T01:31:05.082577174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-dv3jc.gb1.brightbox.com,Uid:9735906e9fc172029b74bec2ef80daf9,Namespace:kube-system,Attempt:0,}" Jan 17 01:31:05.098048 containerd[1507]: time="2026-01-17T01:31:05.097981554Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-dv3jc.gb1.brightbox.com,Uid:c7bf2fd6440a942c12fb1a154cb8f83b,Namespace:kube-system,Attempt:0,}" Jan 17 01:31:05.098523 containerd[1507]: time="2026-01-17T01:31:05.098475322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-dv3jc.gb1.brightbox.com,Uid:719daed51f2353bcb75928d00f132bde,Namespace:kube-system,Attempt:0,}" Jan 17 01:31:05.227825 kubelet[2305]: E0117 01:31:05.227757 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.73.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-dv3jc.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.73.142:6443: connect: connection refused" interval="800ms" Jan 17 01:31:05.433102 kubelet[2305]: I0117 01:31:05.432586 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:05.433102 kubelet[2305]: E0117 01:31:05.433061 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.73.142:6443/api/v1/nodes\": dial tcp 10.243.73.142:6443: connect: connection refused" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:05.611705 kubelet[2305]: W0117 01:31:05.611659 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.73.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.73.142:6443: connect: connection refused Jan 17 01:31:05.612948 kubelet[2305]: E0117 01:31:05.612132 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.73.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.73.142:6443: connect: connection refused" logger="UnhandledError" Jan 17 01:31:05.619281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4026918871.mount: Deactivated successfully. 
Jan 17 01:31:05.628134 containerd[1507]: time="2026-01-17T01:31:05.626180547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 01:31:05.628134 containerd[1507]: time="2026-01-17T01:31:05.627536314Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 01:31:05.629249 containerd[1507]: time="2026-01-17T01:31:05.629197377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 01:31:05.629597 containerd[1507]: time="2026-01-17T01:31:05.629561383Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 01:31:05.629771 containerd[1507]: time="2026-01-17T01:31:05.629741960Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 01:31:05.630943 containerd[1507]: time="2026-01-17T01:31:05.630897698Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 01:31:05.631078 containerd[1507]: time="2026-01-17T01:31:05.631044048Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 17 01:31:05.634583 containerd[1507]: time="2026-01-17T01:31:05.634546897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 01:31:05.637706 containerd[1507]: time="2026-01-17T01:31:05.637658760Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 539.572203ms" Jan 17 01:31:05.642954 containerd[1507]: time="2026-01-17T01:31:05.642747419Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 559.296428ms" Jan 17 01:31:05.647553 containerd[1507]: time="2026-01-17T01:31:05.647315738Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 548.73568ms" Jan 17 01:31:05.727746 kubelet[2305]: W0117 01:31:05.727529 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.73.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.73.142:6443: connect: connection refused Jan 17 01:31:05.727746 
kubelet[2305]: E0117 01:31:05.727594 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.73.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.73.142:6443: connect: connection refused" logger="UnhandledError" Jan 17 01:31:05.857778 containerd[1507]: time="2026-01-17T01:31:05.857659700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:31:05.858241 containerd[1507]: time="2026-01-17T01:31:05.857737021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:31:05.858241 containerd[1507]: time="2026-01-17T01:31:05.858149602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:31:05.860308 containerd[1507]: time="2026-01-17T01:31:05.859557180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:31:05.862519 containerd[1507]: time="2026-01-17T01:31:05.862231826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:31:05.862519 containerd[1507]: time="2026-01-17T01:31:05.862303345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:31:05.862519 containerd[1507]: time="2026-01-17T01:31:05.862333447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:31:05.862519 containerd[1507]: time="2026-01-17T01:31:05.862447124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:31:05.869582 containerd[1507]: time="2026-01-17T01:31:05.869212098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:31:05.869582 containerd[1507]: time="2026-01-17T01:31:05.869296844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:31:05.869582 containerd[1507]: time="2026-01-17T01:31:05.869320972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:31:05.869582 containerd[1507]: time="2026-01-17T01:31:05.869478407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:31:05.885851 kubelet[2305]: W0117 01:31:05.885335 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.73.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-dv3jc.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.73.142:6443: connect: connection refused Jan 17 01:31:05.885851 kubelet[2305]: E0117 01:31:05.885806 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.73.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-dv3jc.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.73.142:6443: connect: connection refused" logger="UnhandledError" Jan 17 01:31:05.933364 systemd[1]: Started cri-containerd-04b6f79703e5ad60d2e280bc7f2c396dd735e4cdf446fb4481a363d9544da9f0.scope - libcontainer container 04b6f79703e5ad60d2e280bc7f2c396dd735e4cdf446fb4481a363d9544da9f0. Jan 17 01:31:05.936828 systemd[1]: Started cri-containerd-0e1330e56f4ebc10ee4eb663b8382b6044001bb8d111404e2b2473d0e1140c1a.scope - libcontainer container 0e1330e56f4ebc10ee4eb663b8382b6044001bb8d111404e2b2473d0e1140c1a. Jan 17 01:31:05.946176 systemd[1]: Started cri-containerd-1baa88637be731a1f42ac70fd6876b5b1c0bbb51ecec18db5b891afe2d5cc299.scope - libcontainer container 1baa88637be731a1f42ac70fd6876b5b1c0bbb51ecec18db5b891afe2d5cc299. Jan 17 01:31:06.031290 kubelet[2305]: E0117 01:31:06.030650 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.73.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-dv3jc.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.73.142:6443: connect: connection refused" interval="1.6s" Jan 17 01:31:06.065918 containerd[1507]: time="2026-01-17T01:31:06.065432865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-dv3jc.gb1.brightbox.com,Uid:c7bf2fd6440a942c12fb1a154cb8f83b,Namespace:kube-system,Attempt:0,} returns sandbox id \"04b6f79703e5ad60d2e280bc7f2c396dd735e4cdf446fb4481a363d9544da9f0\"" Jan 17 01:31:06.079754 containerd[1507]: time="2026-01-17T01:31:06.079708620Z" level=info msg="CreateContainer within sandbox \"04b6f79703e5ad60d2e280bc7f2c396dd735e4cdf446fb4481a363d9544da9f0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 01:31:06.104861 containerd[1507]: time="2026-01-17T01:31:06.104785276Z" level=info msg="CreateContainer within sandbox \"04b6f79703e5ad60d2e280bc7f2c396dd735e4cdf446fb4481a363d9544da9f0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d4d16714ad775bf56cc66689e012e9f71e326dc9d4077dd5043bc70577f6adc3\"" Jan 17 01:31:06.106340 containerd[1507]: time="2026-01-17T01:31:06.105516281Z" level=info msg="StartContainer for \"d4d16714ad775bf56cc66689e012e9f71e326dc9d4077dd5043bc70577f6adc3\"" Jan 17 01:31:06.113732 containerd[1507]: time="2026-01-17T01:31:06.113686939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-dv3jc.gb1.brightbox.com,Uid:719daed51f2353bcb75928d00f132bde,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e1330e56f4ebc10ee4eb663b8382b6044001bb8d111404e2b2473d0e1140c1a\"" Jan 17 01:31:06.120193 containerd[1507]: time="2026-01-17T01:31:06.120147174Z" level=info msg="CreateContainer within sandbox \"0e1330e56f4ebc10ee4eb663b8382b6044001bb8d111404e2b2473d0e1140c1a\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 01:31:06.127594 containerd[1507]: time="2026-01-17T01:31:06.127352912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-dv3jc.gb1.brightbox.com,Uid:9735906e9fc172029b74bec2ef80daf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1baa88637be731a1f42ac70fd6876b5b1c0bbb51ecec18db5b891afe2d5cc299\"" Jan 17 01:31:06.131437 containerd[1507]: time="2026-01-17T01:31:06.131402837Z" level=info msg="CreateContainer within sandbox \"1baa88637be731a1f42ac70fd6876b5b1c0bbb51ecec18db5b891afe2d5cc299\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 01:31:06.135846 kubelet[2305]: W0117 01:31:06.135713 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.73.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.73.142:6443: connect: connection refused Jan 17 01:31:06.135846 kubelet[2305]: E0117 01:31:06.135801 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.73.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.73.142:6443: connect: connection refused" logger="UnhandledError" Jan 17 01:31:06.151616 containerd[1507]: time="2026-01-17T01:31:06.151542523Z" level=info msg="CreateContainer within sandbox \"0e1330e56f4ebc10ee4eb663b8382b6044001bb8d111404e2b2473d0e1140c1a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"728fc4fd11a6965bfcf803ccc27541e0afa40327b9e52f7c8c986cb27ed173e9\"" Jan 17 01:31:06.152404 containerd[1507]: time="2026-01-17T01:31:06.152348314Z" level=info msg="StartContainer for \"728fc4fd11a6965bfcf803ccc27541e0afa40327b9e52f7c8c986cb27ed173e9\"" Jan 17 01:31:06.178455 systemd[1]: Started cri-containerd-d4d16714ad775bf56cc66689e012e9f71e326dc9d4077dd5043bc70577f6adc3.scope - libcontainer container d4d16714ad775bf56cc66689e012e9f71e326dc9d4077dd5043bc70577f6adc3. Jan 17 01:31:06.181736 containerd[1507]: time="2026-01-17T01:31:06.181495226Z" level=info msg="CreateContainer within sandbox \"1baa88637be731a1f42ac70fd6876b5b1c0bbb51ecec18db5b891afe2d5cc299\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1bc3b48cdcf7a9813cfc8ca483ae2915b59e06415a65e30997db966942d25eb8\"" Jan 17 01:31:06.182783 containerd[1507]: time="2026-01-17T01:31:06.182468362Z" level=info msg="StartContainer for \"1bc3b48cdcf7a9813cfc8ca483ae2915b59e06415a65e30997db966942d25eb8\"" Jan 17 01:31:06.206382 systemd[1]: Started cri-containerd-728fc4fd11a6965bfcf803ccc27541e0afa40327b9e52f7c8c986cb27ed173e9.scope - libcontainer container 728fc4fd11a6965bfcf803ccc27541e0afa40327b9e52f7c8c986cb27ed173e9. Jan 17 01:31:06.241799 kubelet[2305]: I0117 01:31:06.241001 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:06.241799 kubelet[2305]: E0117 01:31:06.241503 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.73.142:6443/api/v1/nodes\": dial tcp 10.243.73.142:6443: connect: connection refused" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:06.247412 systemd[1]: Started cri-containerd-1bc3b48cdcf7a9813cfc8ca483ae2915b59e06415a65e30997db966942d25eb8.scope - libcontainer container 1bc3b48cdcf7a9813cfc8ca483ae2915b59e06415a65e30997db966942d25eb8. 
Jan 17 01:31:06.315604 containerd[1507]: time="2026-01-17T01:31:06.314578949Z" level=info msg="StartContainer for \"d4d16714ad775bf56cc66689e012e9f71e326dc9d4077dd5043bc70577f6adc3\" returns successfully" Jan 17 01:31:06.334377 containerd[1507]: time="2026-01-17T01:31:06.334209900Z" level=info msg="StartContainer for \"728fc4fd11a6965bfcf803ccc27541e0afa40327b9e52f7c8c986cb27ed173e9\" returns successfully" Jan 17 01:31:06.346512 containerd[1507]: time="2026-01-17T01:31:06.346420304Z" level=info msg="StartContainer for \"1bc3b48cdcf7a9813cfc8ca483ae2915b59e06415a65e30997db966942d25eb8\" returns successfully" Jan 17 01:31:06.601189 kubelet[2305]: E0117 01:31:06.600651 2305 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.73.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.73.142:6443: connect: connection refused" logger="UnhandledError" Jan 17 01:31:06.671138 kubelet[2305]: E0117 01:31:06.670845 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dv3jc.gb1.brightbox.com\" not found" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:06.681508 kubelet[2305]: E0117 01:31:06.679782 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dv3jc.gb1.brightbox.com\" not found" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:06.685124 kubelet[2305]: E0117 01:31:06.685008 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dv3jc.gb1.brightbox.com\" not found" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:07.687793 kubelet[2305]: E0117 01:31:07.687731 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dv3jc.gb1.brightbox.com\" not found" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:07.688474 kubelet[2305]: E0117 01:31:07.688195 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dv3jc.gb1.brightbox.com\" not found" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:07.845209 kubelet[2305]: I0117 01:31:07.844591 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:09.183936 kubelet[2305]: E0117 01:31:09.183871 2305 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-dv3jc.gb1.brightbox.com\" not found" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:09.362059 kubelet[2305]: I0117 01:31:09.361994 2305 kubelet_node_status.go:78] "Successfully registered node" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:09.406367 kubelet[2305]: I0117 01:31:09.406312 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:09.415253 kubelet[2305]: I0117 01:31:09.415162 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:09.417048 kubelet[2305]: E0117 01:31:09.416775 2305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-dv3jc.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-dv3jc.gb1.brightbox.com" Jan 17 
01:31:09.417764 kubelet[2305]: E0117 01:31:09.417729 2305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-dv3jc.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:09.417764 kubelet[2305]: I0117 01:31:09.417761 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:09.420097 kubelet[2305]: E0117 01:31:09.420058 2305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-dv3jc.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:09.420221 kubelet[2305]: I0117 01:31:09.420101 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:09.421920 kubelet[2305]: E0117 01:31:09.421885 2305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-dv3jc.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:09.597382 kubelet[2305]: I0117 01:31:09.597099 2305 apiserver.go:52] "Watching apiserver" Jan 17 01:31:09.615358 kubelet[2305]: I0117 01:31:09.615293 2305 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 01:31:11.465824 systemd[1]: Reloading requested from client PID 2579 ('systemctl') (unit session-11.scope)... Jan 17 01:31:11.465852 systemd[1]: Reloading... Jan 17 01:31:11.652237 zram_generator::config[2622]: No configuration found. Jan 17 01:31:11.863964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 01:31:12.000413 systemd[1]: Reloading finished in 533 ms. Jan 17 01:31:12.078070 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:31:12.096997 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 01:31:12.097694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 01:31:12.097839 systemd[1]: kubelet.service: Consumed 1.637s CPU time, 130.0M memory peak, 0B memory swap peak. Jan 17 01:31:12.106808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:31:12.389971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 01:31:12.398501 (kubelet)[2681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 01:31:12.492757 kubelet[2681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 01:31:12.494191 kubelet[2681]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 01:31:12.494191 kubelet[2681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 01:31:12.494191 kubelet[2681]: I0117 01:31:12.493506 2681 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 01:31:12.517164 kubelet[2681]: I0117 01:31:12.517094 2681 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 01:31:12.518868 kubelet[2681]: I0117 01:31:12.518235 2681 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 01:31:12.518868 kubelet[2681]: I0117 01:31:12.518709 2681 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 01:31:12.528179 kubelet[2681]: I0117 01:31:12.528153 2681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 01:31:12.531731 kubelet[2681]: I0117 01:31:12.531697 2681 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 01:31:12.542546 kubelet[2681]: E0117 01:31:12.542494 2681 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 01:31:12.542710 kubelet[2681]: I0117 01:31:12.542688 2681 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 01:31:12.547866 kubelet[2681]: I0117 01:31:12.547845 2681 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 01:31:12.548433 kubelet[2681]: I0117 01:31:12.548378 2681 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 01:31:12.551427 kubelet[2681]: I0117 01:31:12.548544 2681 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"srv-dv3jc.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 01:31:12.551427 kubelet[2681]: I0117 01:31:12.551038 2681 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 01:31:12.551427 kubelet[2681]: I0117 01:31:12.551056 2681 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 01:31:12.551427 kubelet[2681]: I0117 01:31:12.551167 2681 state_mem.go:36] "Initialized new in-memory state store" Jan 17 01:31:12.551879 kubelet[2681]: I0117 01:31:12.551857 2681 kubelet.go:446] "Attempting to sync node with API server" Jan 17 01:31:12.552674 kubelet[2681]: I0117 01:31:12.552643 2681 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 01:31:12.554145 kubelet[2681]: I0117 01:31:12.553278 2681 kubelet.go:352] "Adding apiserver pod source" Jan 17 01:31:12.554145 kubelet[2681]: I0117 01:31:12.553310 2681 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 01:31:12.556246 kubelet[2681]: I0117 01:31:12.556222 2681 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 01:31:12.556923 kubelet[2681]: I0117 01:31:12.556901 2681 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 01:31:12.558801 kubelet[2681]: I0117 01:31:12.558779 2681 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 01:31:12.558967 kubelet[2681]: I0117 01:31:12.558949 2681 server.go:1287] "Started kubelet" Jan 17 01:31:12.588055 kubelet[2681]: I0117 01:31:12.587992 2681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 01:31:12.594265 kubelet[2681]: I0117 01:31:12.594224 2681 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 01:31:12.608645 kubelet[2681]: I0117 01:31:12.608612 2681 server.go:479] "Adding debug handlers to kubelet server" Jan 17 01:31:12.610423 kubelet[2681]: I0117 01:31:12.610364 2681 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 
01:31:12.611157 kubelet[2681]: I0117 01:31:12.611069 2681 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 01:31:12.611942 kubelet[2681]: E0117 01:31:12.611650 2681 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-dv3jc.gb1.brightbox.com\" not found" Jan 17 01:31:12.612475 kubelet[2681]: I0117 01:31:12.612442 2681 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 01:31:12.627405 kubelet[2681]: I0117 01:31:12.614545 2681 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 01:31:12.634231 kubelet[2681]: I0117 01:31:12.620659 2681 reconciler.go:26] "Reconciler: start to sync state" Jan 17 01:31:12.634231 kubelet[2681]: I0117 01:31:12.620808 2681 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 01:31:12.636254 kubelet[2681]: I0117 01:31:12.629237 2681 factory.go:221] Registration of the systemd container factory successfully Jan 17 01:31:12.639445 kubelet[2681]: I0117 01:31:12.639241 2681 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 01:31:12.640309 kubelet[2681]: E0117 01:31:12.631662 2681 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 01:31:12.650644 kubelet[2681]: I0117 01:31:12.650565 2681 factory.go:221] Registration of the containerd container factory successfully Jan 17 01:31:12.664149 kubelet[2681]: I0117 01:31:12.664046 2681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 01:31:12.669486 kubelet[2681]: I0117 01:31:12.669452 2681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 01:31:12.670605 kubelet[2681]: I0117 01:31:12.670443 2681 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 01:31:12.670605 kubelet[2681]: I0117 01:31:12.670508 2681 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
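The 01:31:09 "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" entries earlier in this log look like a startup race: the kubelet pushes mirror pods for its static pods before kube-apiserver has finished bootstrapping its built-in priority classes, and the retries at 01:31:13 below do succeed ("already exists"). Purely as a hedged illustration of the API object involved, a standalone client-go sketch that checks for the built-in class (not kubelet code; the kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption for this sketch.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // kube-apiserver creates this class itself shortly after it starts, so a
        // NotFound here right after control-plane boot is usually transient.
        pc, err := cs.SchedulingV1().PriorityClasses().Get(context.Background(),
            "system-node-critical", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s value=%d\n", pc.Name, pc.Value)
    }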
Jan 17 01:31:12.670605 kubelet[2681]: I0117 01:31:12.670528 2681 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 01:31:12.670816 kubelet[2681]: E0117 01:31:12.670605 2681 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 01:31:12.737772 kubelet[2681]: I0117 01:31:12.737732 2681 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 01:31:12.737772 kubelet[2681]: I0117 01:31:12.737758 2681 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 01:31:12.737772 kubelet[2681]: I0117 01:31:12.737784 2681 state_mem.go:36] "Initialized new in-memory state store" Jan 17 01:31:12.738055 kubelet[2681]: I0117 01:31:12.738012 2681 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 01:31:12.738055 kubelet[2681]: I0117 01:31:12.738031 2681 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 01:31:12.738182 kubelet[2681]: I0117 01:31:12.738068 2681 policy_none.go:49] "None policy: Start" Jan 17 01:31:12.738182 kubelet[2681]: I0117 01:31:12.738096 2681 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 01:31:12.738399 kubelet[2681]: I0117 01:31:12.738373 2681 state_mem.go:35] "Initializing new in-memory state store" Jan 17 01:31:12.739754 kubelet[2681]: I0117 01:31:12.738575 2681 state_mem.go:75] "Updated machine memory state" Jan 17 01:31:12.757020 kubelet[2681]: I0117 01:31:12.756486 2681 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 01:31:12.757020 kubelet[2681]: I0117 01:31:12.756779 2681 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 01:31:12.757020 kubelet[2681]: I0117 01:31:12.756807 2681 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 01:31:12.758456 kubelet[2681]: I0117 01:31:12.758376 2681 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 01:31:12.761344 kubelet[2681]: E0117 01:31:12.761319 2681 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 01:31:12.781392 kubelet[2681]: I0117 01:31:12.781356 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:12.784171 kubelet[2681]: I0117 01:31:12.783906 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:12.785690 kubelet[2681]: I0117 01:31:12.785347 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:12.801265 kubelet[2681]: W0117 01:31:12.801235 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 01:31:12.803800 kubelet[2681]: W0117 01:31:12.803333 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 01:31:12.809404 kubelet[2681]: W0117 01:31:12.809317 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 01:31:12.835852 kubelet[2681]: I0117 01:31:12.835464 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/719daed51f2353bcb75928d00f132bde-flexvolume-dir\") pod \"kube-controller-manager-srv-dv3jc.gb1.brightbox.com\" (UID: \"719daed51f2353bcb75928d00f132bde\") " pod="kube-system/kube-controller-manager-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:12.835852 kubelet[2681]: I0117 01:31:12.835530 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/719daed51f2353bcb75928d00f132bde-k8s-certs\") pod \"kube-controller-manager-srv-dv3jc.gb1.brightbox.com\" (UID: \"719daed51f2353bcb75928d00f132bde\") " pod="kube-system/kube-controller-manager-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:12.835852 kubelet[2681]: I0117 01:31:12.835575 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9735906e9fc172029b74bec2ef80daf9-kubeconfig\") pod \"kube-scheduler-srv-dv3jc.gb1.brightbox.com\" (UID: \"9735906e9fc172029b74bec2ef80daf9\") " pod="kube-system/kube-scheduler-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:12.835852 kubelet[2681]: I0117 01:31:12.835604 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7bf2fd6440a942c12fb1a154cb8f83b-ca-certs\") pod \"kube-apiserver-srv-dv3jc.gb1.brightbox.com\" (UID: \"c7bf2fd6440a942c12fb1a154cb8f83b\") " pod="kube-system/kube-apiserver-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:12.835852 kubelet[2681]: I0117 01:31:12.835634 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/719daed51f2353bcb75928d00f132bde-ca-certs\") pod \"kube-controller-manager-srv-dv3jc.gb1.brightbox.com\" (UID: \"719daed51f2353bcb75928d00f132bde\") " pod="kube-system/kube-controller-manager-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:12.836252 kubelet[2681]: I0117 01:31:12.835662 2681 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/719daed51f2353bcb75928d00f132bde-kubeconfig\") pod \"kube-controller-manager-srv-dv3jc.gb1.brightbox.com\" (UID: \"719daed51f2353bcb75928d00f132bde\") " pod="kube-system/kube-controller-manager-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:12.836252 kubelet[2681]: I0117 01:31:12.835690 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/719daed51f2353bcb75928d00f132bde-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-dv3jc.gb1.brightbox.com\" (UID: \"719daed51f2353bcb75928d00f132bde\") " pod="kube-system/kube-controller-manager-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:12.836252 kubelet[2681]: I0117 01:31:12.835717 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7bf2fd6440a942c12fb1a154cb8f83b-k8s-certs\") pod \"kube-apiserver-srv-dv3jc.gb1.brightbox.com\" (UID: \"c7bf2fd6440a942c12fb1a154cb8f83b\") " pod="kube-system/kube-apiserver-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:12.836252 kubelet[2681]: I0117 01:31:12.835748 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7bf2fd6440a942c12fb1a154cb8f83b-usr-share-ca-certificates\") pod \"kube-apiserver-srv-dv3jc.gb1.brightbox.com\" (UID: \"c7bf2fd6440a942c12fb1a154cb8f83b\") " pod="kube-system/kube-apiserver-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:12.891898 kubelet[2681]: I0117 01:31:12.890642 2681 kubelet_node_status.go:75] "Attempting to register node" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:12.903908 kubelet[2681]: I0117 01:31:12.903533 2681 kubelet_node_status.go:124] "Node was previously registered" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:12.903908 kubelet[2681]: I0117 01:31:12.903634 2681 kubelet_node_status.go:78] "Successfully registered node" node="srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:13.555673 kubelet[2681]: I0117 01:31:13.555166 2681 apiserver.go:52] "Watching apiserver" Jan 17 01:31:13.634799 kubelet[2681]: I0117 01:31:13.634720 2681 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 01:31:13.704030 kubelet[2681]: I0117 01:31:13.701668 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:13.717196 kubelet[2681]: W0117 01:31:13.717159 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 01:31:13.717196 kubelet[2681]: E0117 01:31:13.717228 2681 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-dv3jc.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-dv3jc.gb1.brightbox.com" Jan 17 01:31:13.739406 kubelet[2681]: I0117 01:31:13.739090 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-dv3jc.gb1.brightbox.com" podStartSLOduration=1.7384919170000002 podStartE2EDuration="1.738491917s" podCreationTimestamp="2026-01-17 01:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 01:31:13.737676959 +0000 
UTC m=+1.325794162" watchObservedRunningTime="2026-01-17 01:31:13.738491917 +0000 UTC m=+1.326609113" Jan 17 01:31:13.768542 kubelet[2681]: I0117 01:31:13.767963 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-dv3jc.gb1.brightbox.com" podStartSLOduration=1.767946084 podStartE2EDuration="1.767946084s" podCreationTimestamp="2026-01-17 01:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 01:31:13.752742443 +0000 UTC m=+1.340859667" watchObservedRunningTime="2026-01-17 01:31:13.767946084 +0000 UTC m=+1.356063287" Jan 17 01:31:13.786671 kubelet[2681]: I0117 01:31:13.786044 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-dv3jc.gb1.brightbox.com" podStartSLOduration=1.7860264460000002 podStartE2EDuration="1.786026446s" podCreationTimestamp="2026-01-17 01:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 01:31:13.768296228 +0000 UTC m=+1.356413445" watchObservedRunningTime="2026-01-17 01:31:13.786026446 +0000 UTC m=+1.374143641" Jan 17 01:31:15.977770 kubelet[2681]: I0117 01:31:15.977542 2681 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 01:31:15.979177 containerd[1507]: time="2026-01-17T01:31:15.978891767Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 01:31:15.979834 kubelet[2681]: I0117 01:31:15.979242 2681 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 01:31:16.727773 kubelet[2681]: I0117 01:31:16.727711 2681 status_manager.go:890] "Failed to get status for pod" podUID="d0751033-11a5-4dd6-a461-bac9f947d582" pod="kube-system/kube-proxy-ztrbh" err="pods \"kube-proxy-ztrbh\" is forbidden: User \"system:node:srv-dv3jc.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-dv3jc.gb1.brightbox.com' and this object" Jan 17 01:31:16.737717 systemd[1]: Created slice kubepods-besteffort-podd0751033_11a5_4dd6_a461_bac9f947d582.slice - libcontainer container kubepods-besteffort-podd0751033_11a5_4dd6_a461_bac9f947d582.slice. 
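The podStartSLOduration figures above are plain wall-clock differences: the scheduler pod's podCreationTimestamp is 01:31:12 and its watchObservedRunningTime is 01:31:13.738491917, and subtracting gives exactly the reported 1.738491917s (the trailing digits in 1.7384919170000002 are just binary float rounding). A minimal Go sketch of the same subtraction, with the timestamps copied from the entries above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the log entries above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2026-01-17 01:31:12 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2026-01-17 01:31:13.738491917 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(observed.Sub(created)) // 1.738491917s
    }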
Jan 17 01:31:16.761974 kubelet[2681]: I0117 01:31:16.761922 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0751033-11a5-4dd6-a461-bac9f947d582-lib-modules\") pod \"kube-proxy-ztrbh\" (UID: \"d0751033-11a5-4dd6-a461-bac9f947d582\") " pod="kube-system/kube-proxy-ztrbh" Jan 17 01:31:16.762188 kubelet[2681]: I0117 01:31:16.761981 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drp25\" (UniqueName: \"kubernetes.io/projected/d0751033-11a5-4dd6-a461-bac9f947d582-kube-api-access-drp25\") pod \"kube-proxy-ztrbh\" (UID: \"d0751033-11a5-4dd6-a461-bac9f947d582\") " pod="kube-system/kube-proxy-ztrbh" Jan 17 01:31:16.762188 kubelet[2681]: I0117 01:31:16.762016 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d0751033-11a5-4dd6-a461-bac9f947d582-kube-proxy\") pod \"kube-proxy-ztrbh\" (UID: \"d0751033-11a5-4dd6-a461-bac9f947d582\") " pod="kube-system/kube-proxy-ztrbh" Jan 17 01:31:16.762188 kubelet[2681]: I0117 01:31:16.762044 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0751033-11a5-4dd6-a461-bac9f947d582-xtables-lock\") pod \"kube-proxy-ztrbh\" (UID: \"d0751033-11a5-4dd6-a461-bac9f947d582\") " pod="kube-system/kube-proxy-ztrbh" Jan 17 01:31:17.048267 containerd[1507]: time="2026-01-17T01:31:17.048060650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ztrbh,Uid:d0751033-11a5-4dd6-a461-bac9f947d582,Namespace:kube-system,Attempt:0,}" Jan 17 01:31:17.118327 containerd[1507]: time="2026-01-17T01:31:17.115782160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:31:17.118327 containerd[1507]: time="2026-01-17T01:31:17.116393048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:31:17.118327 containerd[1507]: time="2026-01-17T01:31:17.116475215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:31:17.118865 containerd[1507]: time="2026-01-17T01:31:17.118277095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:31:17.150151 kubelet[2681]: I0117 01:31:17.149022 2681 status_manager.go:890] "Failed to get status for pod" podUID="21ddaecc-8202-455c-bd6a-7971da5002af" pod="tigera-operator/tigera-operator-7dcd859c48-66wvp" err="pods \"tigera-operator-7dcd859c48-66wvp\" is forbidden: User \"system:node:srv-dv3jc.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-dv3jc.gb1.brightbox.com' and this object" Jan 17 01:31:17.149759 systemd[1]: Created slice kubepods-besteffort-pod21ddaecc_8202_455c_bd6a_7971da5002af.slice - libcontainer container kubepods-besteffort-pod21ddaecc_8202_455c_bd6a_7971da5002af.slice. 
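The "Created slice" names above follow kubelet's systemd cgroup-driver convention, kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID rewritten to underscores. An illustrative helper showing the mapping (an assumed function for clarity, not kubelet's actual implementation):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceForPod is an illustrative helper, not kubelet's code: systemd slice
    // names reserve "-" as a hierarchy separator, so the pod UID's dashes are
    // rewritten to underscores before being embedded in the unit name.
    func sliceForPod(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        // UID taken from the kube-proxy-ztrbh entries above.
        fmt.Println(sliceForPod("besteffort", "d0751033-11a5-4dd6-a461-bac9f947d582"))
        // Output: kubepods-besteffort-podd0751033_11a5_4dd6_a461_bac9f947d582.slice
    }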
Jan 17 01:31:17.170037 kubelet[2681]: I0117 01:31:17.169989 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd2jq\" (UniqueName: \"kubernetes.io/projected/21ddaecc-8202-455c-bd6a-7971da5002af-kube-api-access-bd2jq\") pod \"tigera-operator-7dcd859c48-66wvp\" (UID: \"21ddaecc-8202-455c-bd6a-7971da5002af\") " pod="tigera-operator/tigera-operator-7dcd859c48-66wvp" Jan 17 01:31:17.171152 kubelet[2681]: I0117 01:31:17.170297 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/21ddaecc-8202-455c-bd6a-7971da5002af-var-lib-calico\") pod \"tigera-operator-7dcd859c48-66wvp\" (UID: \"21ddaecc-8202-455c-bd6a-7971da5002af\") " pod="tigera-operator/tigera-operator-7dcd859c48-66wvp" Jan 17 01:31:17.175842 systemd[1]: run-containerd-runc-k8s.io-c1e9356b04ba4baf1fd69a2ec88241bf7d4d606a0375fc95fa41b9c6e1964f5e-runc.SSO97h.mount: Deactivated successfully. Jan 17 01:31:17.191546 systemd[1]: Started cri-containerd-c1e9356b04ba4baf1fd69a2ec88241bf7d4d606a0375fc95fa41b9c6e1964f5e.scope - libcontainer container c1e9356b04ba4baf1fd69a2ec88241bf7d4d606a0375fc95fa41b9c6e1964f5e. Jan 17 01:31:17.242674 containerd[1507]: time="2026-01-17T01:31:17.242598535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ztrbh,Uid:d0751033-11a5-4dd6-a461-bac9f947d582,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1e9356b04ba4baf1fd69a2ec88241bf7d4d606a0375fc95fa41b9c6e1964f5e\"" Jan 17 01:31:17.251325 containerd[1507]: time="2026-01-17T01:31:17.251158832Z" level=info msg="CreateContainer within sandbox \"c1e9356b04ba4baf1fd69a2ec88241bf7d4d606a0375fc95fa41b9c6e1964f5e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 01:31:17.279663 containerd[1507]: time="2026-01-17T01:31:17.279594176Z" level=info msg="CreateContainer within sandbox \"c1e9356b04ba4baf1fd69a2ec88241bf7d4d606a0375fc95fa41b9c6e1964f5e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fa65d52e930543aed83032877da680779282c64da7fa0139edaad73f05b1464c\"" Jan 17 01:31:17.280852 containerd[1507]: time="2026-01-17T01:31:17.280716269Z" level=info msg="StartContainer for \"fa65d52e930543aed83032877da680779282c64da7fa0139edaad73f05b1464c\"" Jan 17 01:31:17.331380 systemd[1]: Started cri-containerd-fa65d52e930543aed83032877da680779282c64da7fa0139edaad73f05b1464c.scope - libcontainer container fa65d52e930543aed83032877da680779282c64da7fa0139edaad73f05b1464c. Jan 17 01:31:17.379091 containerd[1507]: time="2026-01-17T01:31:17.379020053Z" level=info msg="StartContainer for \"fa65d52e930543aed83032877da680779282c64da7fa0139edaad73f05b1464c\" returns successfully" Jan 17 01:31:17.457961 containerd[1507]: time="2026-01-17T01:31:17.457861158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-66wvp,Uid:21ddaecc-8202-455c-bd6a-7971da5002af,Namespace:tigera-operator,Attempt:0,}" Jan 17 01:31:17.511685 containerd[1507]: time="2026-01-17T01:31:17.511306565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:31:17.511685 containerd[1507]: time="2026-01-17T01:31:17.511393583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:31:17.511685 containerd[1507]: time="2026-01-17T01:31:17.511411538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:31:17.511685 containerd[1507]: time="2026-01-17T01:31:17.511577645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:31:17.557449 systemd[1]: Started cri-containerd-0c7838fda24efad7a3bedbe7e06a57696aa84d07e306eac8591832ef0e9a7af5.scope - libcontainer container 0c7838fda24efad7a3bedbe7e06a57696aa84d07e306eac8591832ef0e9a7af5. Jan 17 01:31:17.641773 containerd[1507]: time="2026-01-17T01:31:17.641533120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-66wvp,Uid:21ddaecc-8202-455c-bd6a-7971da5002af,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0c7838fda24efad7a3bedbe7e06a57696aa84d07e306eac8591832ef0e9a7af5\"" Jan 17 01:31:17.646521 containerd[1507]: time="2026-01-17T01:31:17.646284116Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 01:31:17.738923 kubelet[2681]: I0117 01:31:17.738841 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ztrbh" podStartSLOduration=1.738822477 podStartE2EDuration="1.738822477s" podCreationTimestamp="2026-01-17 01:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 01:31:17.728917399 +0000 UTC m=+5.317034605" watchObservedRunningTime="2026-01-17 01:31:17.738822477 +0000 UTC m=+5.326939679" Jan 17 01:31:19.715346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1746487865.mount: Deactivated successfully. 
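The \x2d in the tmpmount unit name just above is systemd unit-name escaping: the mount unit for /var/lib/containerd/tmpmounts/containerd-mount1746487865 turns path separators into "-" and literal "-" bytes into \x2d. A simplified sketch of that mapping (real systemd-escape also hex-encodes other special characters and leading dots):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeMountUnit is a simplified sketch of systemd mount-unit naming:
    // strip the leading "/", hex-escape literal dashes, turn "/" into "-".
    func escapeMountUnit(path string) string {
        p := strings.Trim(path, "/")
        p = strings.ReplaceAll(p, "-", `\x2d`)
        return strings.ReplaceAll(p, "/", "-") + ".mount"
    }

    func main() {
        fmt.Println(escapeMountUnit("/var/lib/containerd/tmpmounts/containerd-mount1746487865"))
        // Output: var-lib-containerd-tmpmounts-containerd\x2dmount1746487865.mount
    }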
Jan 17 01:31:21.181237 containerd[1507]: time="2026-01-17T01:31:21.181032388Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:31:21.182448 containerd[1507]: time="2026-01-17T01:31:21.182201071Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 01:31:21.202871 containerd[1507]: time="2026-01-17T01:31:21.202811033Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:31:21.210054 containerd[1507]: time="2026-01-17T01:31:21.208568238Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:31:21.210054 containerd[1507]: time="2026-01-17T01:31:21.209812837Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.563460105s" Jan 17 01:31:21.210054 containerd[1507]: time="2026-01-17T01:31:21.209887559Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 01:31:21.214365 containerd[1507]: time="2026-01-17T01:31:21.214326680Z" level=info msg="CreateContainer within sandbox \"0c7838fda24efad7a3bedbe7e06a57696aa84d07e306eac8591832ef0e9a7af5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 01:31:21.238085 containerd[1507]: time="2026-01-17T01:31:21.237869312Z" level=info msg="CreateContainer within sandbox \"0c7838fda24efad7a3bedbe7e06a57696aa84d07e306eac8591832ef0e9a7af5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"eba5bdbfa23bb9b43cd9115177c97e7d8557075e65f7c202671e46735a5603c0\"" Jan 17 01:31:21.240576 containerd[1507]: time="2026-01-17T01:31:21.240542228Z" level=info msg="StartContainer for \"eba5bdbfa23bb9b43cd9115177c97e7d8557075e65f7c202671e46735a5603c0\"" Jan 17 01:31:21.310354 systemd[1]: Started cri-containerd-eba5bdbfa23bb9b43cd9115177c97e7d8557075e65f7c202671e46735a5603c0.scope - libcontainer container eba5bdbfa23bb9b43cd9115177c97e7d8557075e65f7c202671e46735a5603c0. 
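For scale, the "Pulled image" entry above reports 25,057,686 bytes for the tigera-operator image in 3.563460105s, which works out to roughly 6.7 MiB/s from quay.io:

    package main

    import "fmt"

    func main() {
        // Figures from the "Pulled image" entry above.
        const bytes = 25057686.0 // image size in bytes
        const seconds = 3.563460105
        fmt.Printf("%.1f MiB/s\n", bytes/seconds/(1<<20)) // prints 6.7 MiB/s
    }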
Jan 17 01:31:21.359456 containerd[1507]: time="2026-01-17T01:31:21.358858398Z" level=info msg="StartContainer for \"eba5bdbfa23bb9b43cd9115177c97e7d8557075e65f7c202671e46735a5603c0\" returns successfully" Jan 17 01:31:22.245360 kubelet[2681]: I0117 01:31:22.245074 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-66wvp" podStartSLOduration=1.677143027 podStartE2EDuration="5.245035204s" podCreationTimestamp="2026-01-17 01:31:17 +0000 UTC" firstStartedPulling="2026-01-17 01:31:17.643852452 +0000 UTC m=+5.231969644" lastFinishedPulling="2026-01-17 01:31:21.211744626 +0000 UTC m=+8.799861821" observedRunningTime="2026-01-17 01:31:21.741908408 +0000 UTC m=+9.330025607" watchObservedRunningTime="2026-01-17 01:31:22.245035204 +0000 UTC m=+9.833152400" Jan 17 01:31:29.305598 sudo[1762]: pam_unix(sudo:session): session closed for user root Jan 17 01:31:29.403617 sshd[1759]: pam_unix(sshd:session): session closed for user core Jan 17 01:31:29.410887 systemd[1]: sshd@8-10.243.73.142:22-20.161.92.111:56052.service: Deactivated successfully. Jan 17 01:31:29.415348 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 01:31:29.416097 systemd[1]: session-11.scope: Consumed 7.113s CPU time, 141.3M memory peak, 0B memory swap peak. Jan 17 01:31:29.418219 systemd-logind[1489]: Session 11 logged out. Waiting for processes to exit. Jan 17 01:31:29.423583 systemd-logind[1489]: Removed session 11. Jan 17 01:31:36.452991 systemd[1]: Created slice kubepods-besteffort-pod9f93bec5_621d_4ca0_ba30_87d38389e277.slice - libcontainer container kubepods-besteffort-pod9f93bec5_621d_4ca0_ba30_87d38389e277.slice. Jan 17 01:31:36.519354 kubelet[2681]: I0117 01:31:36.518978 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqrrf\" (UniqueName: \"kubernetes.io/projected/9f93bec5-621d-4ca0-ba30-87d38389e277-kube-api-access-dqrrf\") pod \"calico-typha-77cffb547c-5kd8w\" (UID: \"9f93bec5-621d-4ca0-ba30-87d38389e277\") " pod="calico-system/calico-typha-77cffb547c-5kd8w" Jan 17 01:31:36.519354 kubelet[2681]: I0117 01:31:36.519129 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f93bec5-621d-4ca0-ba30-87d38389e277-tigera-ca-bundle\") pod \"calico-typha-77cffb547c-5kd8w\" (UID: \"9f93bec5-621d-4ca0-ba30-87d38389e277\") " pod="calico-system/calico-typha-77cffb547c-5kd8w" Jan 17 01:31:36.519354 kubelet[2681]: I0117 01:31:36.519191 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9f93bec5-621d-4ca0-ba30-87d38389e277-typha-certs\") pod \"calico-typha-77cffb547c-5kd8w\" (UID: \"9f93bec5-621d-4ca0-ba30-87d38389e277\") " pod="calico-system/calico-typha-77cffb547c-5kd8w" Jan 17 01:31:36.775907 containerd[1507]: time="2026-01-17T01:31:36.775498402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77cffb547c-5kd8w,Uid:9f93bec5-621d-4ca0-ba30-87d38389e277,Namespace:calico-system,Attempt:0,}" Jan 17 01:31:36.817454 systemd[1]: Created slice kubepods-besteffort-pod0cfe16ee_24fd_4e7c_8d23_706206f4247a.slice - libcontainer container kubepods-besteffort-pod0cfe16ee_24fd_4e7c_8d23_706206f4247a.slice. Jan 17 01:31:36.870098 containerd[1507]: time="2026-01-17T01:31:36.869923917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:31:36.871390 containerd[1507]: time="2026-01-17T01:31:36.870069842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:31:36.871390 containerd[1507]: time="2026-01-17T01:31:36.871241602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:31:36.871733 containerd[1507]: time="2026-01-17T01:31:36.871659018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:31:36.929419 kubelet[2681]: I0117 01:31:36.926089 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxz42\" (UniqueName: \"kubernetes.io/projected/0cfe16ee-24fd-4e7c-8d23-706206f4247a-kube-api-access-zxz42\") pod \"calico-node-5p5ts\" (UID: \"0cfe16ee-24fd-4e7c-8d23-706206f4247a\") " pod="calico-system/calico-node-5p5ts" Jan 17 01:31:36.938214 kubelet[2681]: I0117 01:31:36.938169 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0cfe16ee-24fd-4e7c-8d23-706206f4247a-cni-net-dir\") pod \"calico-node-5p5ts\" (UID: \"0cfe16ee-24fd-4e7c-8d23-706206f4247a\") " pod="calico-system/calico-node-5p5ts" Jan 17 01:31:36.940748 kubelet[2681]: I0117 01:31:36.939201 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cfe16ee-24fd-4e7c-8d23-706206f4247a-tigera-ca-bundle\") pod \"calico-node-5p5ts\" (UID: \"0cfe16ee-24fd-4e7c-8d23-706206f4247a\") " pod="calico-system/calico-node-5p5ts" Jan 17 01:31:36.940748 kubelet[2681]: I0117 01:31:36.940310 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0cfe16ee-24fd-4e7c-8d23-706206f4247a-flexvol-driver-host\") pod \"calico-node-5p5ts\" (UID: \"0cfe16ee-24fd-4e7c-8d23-706206f4247a\") " pod="calico-system/calico-node-5p5ts" Jan 17 01:31:36.940748 kubelet[2681]: I0117 01:31:36.940358 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0cfe16ee-24fd-4e7c-8d23-706206f4247a-cni-bin-dir\") pod \"calico-node-5p5ts\" (UID: \"0cfe16ee-24fd-4e7c-8d23-706206f4247a\") " pod="calico-system/calico-node-5p5ts" Jan 17 01:31:36.940748 kubelet[2681]: I0117 01:31:36.940391 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0cfe16ee-24fd-4e7c-8d23-706206f4247a-var-lib-calico\") pod \"calico-node-5p5ts\" (UID: \"0cfe16ee-24fd-4e7c-8d23-706206f4247a\") " pod="calico-system/calico-node-5p5ts" Jan 17 01:31:36.940748 kubelet[2681]: I0117 01:31:36.940445 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0cfe16ee-24fd-4e7c-8d23-706206f4247a-cni-log-dir\") pod \"calico-node-5p5ts\" (UID: \"0cfe16ee-24fd-4e7c-8d23-706206f4247a\") " pod="calico-system/calico-node-5p5ts" Jan 17 01:31:36.941053 kubelet[2681]: I0117 01:31:36.940512 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0cfe16ee-24fd-4e7c-8d23-706206f4247a-node-certs\") pod \"calico-node-5p5ts\" (UID: \"0cfe16ee-24fd-4e7c-8d23-706206f4247a\") " pod="calico-system/calico-node-5p5ts" Jan 17 01:31:36.941053 kubelet[2681]: I0117 01:31:36.940537 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cfe16ee-24fd-4e7c-8d23-706206f4247a-lib-modules\") pod \"calico-node-5p5ts\" (UID: \"0cfe16ee-24fd-4e7c-8d23-706206f4247a\") " pod="calico-system/calico-node-5p5ts" Jan 17 01:31:36.941053 kubelet[2681]: I0117 01:31:36.940571 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0cfe16ee-24fd-4e7c-8d23-706206f4247a-policysync\") pod \"calico-node-5p5ts\" (UID: \"0cfe16ee-24fd-4e7c-8d23-706206f4247a\") " pod="calico-system/calico-node-5p5ts" Jan 17 01:31:36.941053 kubelet[2681]: I0117 01:31:36.940598 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0cfe16ee-24fd-4e7c-8d23-706206f4247a-var-run-calico\") pod \"calico-node-5p5ts\" (UID: \"0cfe16ee-24fd-4e7c-8d23-706206f4247a\") " pod="calico-system/calico-node-5p5ts" Jan 17 01:31:36.941053 kubelet[2681]: I0117 01:31:36.940634 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cfe16ee-24fd-4e7c-8d23-706206f4247a-xtables-lock\") pod \"calico-node-5p5ts\" (UID: \"0cfe16ee-24fd-4e7c-8d23-706206f4247a\") " pod="calico-system/calico-node-5p5ts" Jan 17 01:31:36.969435 systemd[1]: Started cri-containerd-1e81c754c2c8501ce0a34f8be0f110ee34057f47be3df6778ae141f0f386a1f8.scope - libcontainer container 1e81c754c2c8501ce0a34f8be0f110ee34057f47be3df6778ae141f0f386a1f8. 
Jan 17 01:31:37.012333 kubelet[2681]: E0117 01:31:37.011869 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5" Jan 17 01:31:37.043756 kubelet[2681]: I0117 01:31:37.043596 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbnl\" (UniqueName: \"kubernetes.io/projected/c16de311-2d09-4fff-8444-304a8ff3b2b5-kube-api-access-qfbnl\") pod \"csi-node-driver-crszt\" (UID: \"c16de311-2d09-4fff-8444-304a8ff3b2b5\") " pod="calico-system/csi-node-driver-crszt" Jan 17 01:31:37.044539 kubelet[2681]: I0117 01:31:37.044514 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c16de311-2d09-4fff-8444-304a8ff3b2b5-kubelet-dir\") pod \"csi-node-driver-crszt\" (UID: \"c16de311-2d09-4fff-8444-304a8ff3b2b5\") " pod="calico-system/csi-node-driver-crszt" Jan 17 01:31:37.044665 kubelet[2681]: I0117 01:31:37.044641 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c16de311-2d09-4fff-8444-304a8ff3b2b5-registration-dir\") pod \"csi-node-driver-crszt\" (UID: \"c16de311-2d09-4fff-8444-304a8ff3b2b5\") " pod="calico-system/csi-node-driver-crszt" Jan 17 01:31:37.046217 kubelet[2681]: I0117 01:31:37.046190 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c16de311-2d09-4fff-8444-304a8ff3b2b5-socket-dir\") pod \"csi-node-driver-crszt\" (UID: \"c16de311-2d09-4fff-8444-304a8ff3b2b5\") " pod="calico-system/csi-node-driver-crszt" Jan 17 01:31:37.046931 kubelet[2681]: I0117 01:31:37.046483 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c16de311-2d09-4fff-8444-304a8ff3b2b5-varrun\") pod \"csi-node-driver-crszt\" (UID: \"c16de311-2d09-4fff-8444-304a8ff3b2b5\") " pod="calico-system/csi-node-driver-crszt" Jan 17 01:31:37.067078 kubelet[2681]: E0117 01:31:37.066526 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:31:37.067078 kubelet[2681]: W0117 01:31:37.066583 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:31:37.068156 kubelet[2681]: E0117 01:31:37.067830 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:31:37.082275 kubelet[2681]: E0117 01:31:37.082233 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:31:37.082509 kubelet[2681]: W0117 01:31:37.082477 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:31:37.082638 kubelet[2681]: E0117 01:31:37.082615 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:31:37.137474 containerd[1507]: time="2026-01-17T01:31:37.136169590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5p5ts,Uid:0cfe16ee-24fd-4e7c-8d23-706206f4247a,Namespace:calico-system,Attempt:0,}" Jan 17 01:31:37.150328 kubelet[2681]: E0117 01:31:37.150285 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:31:37.150648 kubelet[2681]: W0117 01:31:37.150375 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:31:37.150648 kubelet[2681]: E0117 01:31:37.150571 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:31:37.153132 kubelet[2681]: E0117 01:31:37.152764 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:31:37.153132 kubelet[2681]: W0117 01:31:37.153129 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:31:37.153274 kubelet[2681]: E0117 01:31:37.153161 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:31:37.153612 kubelet[2681]: E0117 01:31:37.153566 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:31:37.153612 kubelet[2681]: W0117 01:31:37.153581 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:31:37.154064 kubelet[2681]: E0117 01:31:37.153645 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:31:37.154962 kubelet[2681]: E0117 01:31:37.154934 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:31:37.154962 kubelet[2681]: W0117 01:31:37.154958 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:31:37.155246 kubelet[2681]: E0117 01:31:37.155047 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:31:37.156029 kubelet[2681]: E0117 01:31:37.156006 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:31:37.156202 kubelet[2681]: W0117 01:31:37.156143 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:31:37.157193 kubelet[2681]: E0117 01:31:37.156316 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:31:37.158287 kubelet[2681]: E0117 01:31:37.158103 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:31:37.158287 kubelet[2681]: W0117 01:31:37.158151 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:31:37.158287 kubelet[2681]: E0117 01:31:37.158191 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:31:37.159042 kubelet[2681]: E0117 01:31:37.158561 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:31:37.159042 kubelet[2681]: W0117 01:31:37.158581 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:31:37.159596 kubelet[2681]: E0117 01:31:37.159491 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:31:37.160487 kubelet[2681]: E0117 01:31:37.160018 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:31:37.160487 kubelet[2681]: W0117 01:31:37.160036 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:31:37.160487 kubelet[2681]: E0117 01:31:37.160079 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:31:37.161305 kubelet[2681]: E0117 01:31:37.161285 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:31:37.161849 kubelet[2681]: W0117 01:31:37.161617 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:31:37.161849 kubelet[2681]: E0117 01:31:37.161670 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:31:37.163237 containerd[1507]: time="2026-01-17T01:31:37.163017400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77cffb547c-5kd8w,Uid:9f93bec5-621d-4ca0-ba30-87d38389e277,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e81c754c2c8501ce0a34f8be0f110ee34057f47be3df6778ae141f0f386a1f8\"" Jan 17 01:31:37.163527 kubelet[2681]: E0117 01:31:37.163378 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:31:37.163527 kubelet[2681]: W0117 01:31:37.163398 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:31:37.164069 kubelet[2681]: E0117 01:31:37.164035 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:31:37.164305 kubelet[2681]: W0117 01:31:37.164146 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:31:37.165933 kubelet[2681]: E0117 01:31:37.164788 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:31:37.165933 kubelet[2681]: W0117 01:31:37.164817 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:31:37.165933 kubelet[2681]: E0117 01:31:37.164964 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:31:37.165933 kubelet[2681]: E0117 01:31:37.165059 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:31:37.165933 kubelet[2681]: E0117 01:31:37.165082 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:31:37.166555 kubelet[2681]: E0117 01:31:37.166418 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 01:31:37.166555 kubelet[2681]: W0117 01:31:37.166437 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 01:31:37.169193 kubelet[2681]: E0117 01:31:37.168916 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 01:31:37.194168 containerd[1507]: time="2026-01-17T01:31:37.194008271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 17 01:31:37.198351 kubelet[2681]: E0117 01:31:37.198316 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 01:31:37.198351 kubelet[2681]: W0117 01:31:37.198343 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 01:31:37.198467 kubelet[2681]: E0117 01:31:37.198373 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
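The repeated kubelet pair above is the exec-based FlexVolume probe at work: the kubelet execs the driver binary named by the plugin directory (nodeagent~uds maps to vendor "nodeagent", driver "uds", hence .../nodeagent~uds/uds) with the argument init and parses stdout as JSON, so a missing binary yields empty output and "unexpected end of JSON input". A minimal sketch of a driver that would satisfy the init handshake, assuming the standard FlexVolume calling convention (the script itself is a hypothetical stand-in, not part of this system):

    #!/usr/bin/env python3
    # Hypothetical stand-in for
    # /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
    # The kubelet execs "<driver> init" and expects a JSON status object on stdout.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # The handshake the kubelet is waiting for; empty stdout is what
            # produces "unexpected end of JSON input" in the log above.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # Unimplemented calls must still answer in JSON.
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())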
Jan 17 01:31:37.218756 containerd[1507]: time="2026-01-17T01:31:37.218432658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 01:31:37.219243 containerd[1507]: time="2026-01-17T01:31:37.218584255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 01:31:37.219243 containerd[1507]: time="2026-01-17T01:31:37.218614233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 01:31:37.219243 containerd[1507]: time="2026-01-17T01:31:37.218837391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 01:31:37.256340 systemd[1]: Started cri-containerd-d456d3daacdbc4138901ca9e5d3bfb4fbfc02a6236e444b9f8661c676dee854d.scope - libcontainer container d456d3daacdbc4138901ca9e5d3bfb4fbfc02a6236e444b9f8661c676dee854d.
Jan 17 01:31:37.315101 containerd[1507]: time="2026-01-17T01:31:37.314526943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5p5ts,Uid:0cfe16ee-24fd-4e7c-8d23-706206f4247a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d456d3daacdbc4138901ca9e5d3bfb4fbfc02a6236e444b9f8661c676dee854d\""
Jan 17 01:31:38.673136 kubelet[2681]: E0117 01:31:38.672129 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5"
Jan 17 01:31:38.880504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1474869449.mount: Deactivated successfully.
Jan 17 01:31:40.679252 kubelet[2681]: E0117 01:31:40.679187 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5"
Jan 17 01:31:40.702856 containerd[1507]: time="2026-01-17T01:31:40.702189696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 01:31:40.703773 containerd[1507]: time="2026-01-17T01:31:40.703722205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 17 01:31:40.704191 containerd[1507]: time="2026-01-17T01:31:40.704158389Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 01:31:40.707038 containerd[1507]: time="2026-01-17T01:31:40.706999276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 01:31:40.708387 containerd[1507]: time="2026-01-17T01:31:40.708352114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.514290212s"
Jan 17 01:31:40.708673 containerd[1507]: time="2026-01-17T01:31:40.708519496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
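The pull figures just above also pin down an approximate transfer rate; a back-of-the-envelope check using only the numbers quoted in the log (rate = bytes read / reported pull time):

    # Rough throughput for the typha pull logged above.
    bytes_read = 35_234_628      # "bytes read=35234628"
    pull_seconds = 3.514290212   # "in 3.514290212s"
    print(f"{bytes_read / pull_seconds / 2**20:.1f} MiB/s")  # ~9.6 MiB/s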
Jan 17 01:31:40.719954 containerd[1507]: time="2026-01-17T01:31:40.719875705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 17 01:31:40.776917 containerd[1507]: time="2026-01-17T01:31:40.776547844Z" level=info msg="CreateContainer within sandbox \"1e81c754c2c8501ce0a34f8be0f110ee34057f47be3df6778ae141f0f386a1f8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 01:31:40.821961 containerd[1507]: time="2026-01-17T01:31:40.821904926Z" level=info msg="CreateContainer within sandbox \"1e81c754c2c8501ce0a34f8be0f110ee34057f47be3df6778ae141f0f386a1f8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"976150ae4f1f2d7902174278d7712d422f7a302d0fae3ead2648a81f422f0cb0\""
Jan 17 01:31:40.841438 containerd[1507]: time="2026-01-17T01:31:40.841304803Z" level=info msg="StartContainer for \"976150ae4f1f2d7902174278d7712d422f7a302d0fae3ead2648a81f422f0cb0\""
Jan 17 01:31:40.912391 systemd[1]: Started cri-containerd-976150ae4f1f2d7902174278d7712d422f7a302d0fae3ead2648a81f422f0cb0.scope - libcontainer container 976150ae4f1f2d7902174278d7712d422f7a302d0fae3ead2648a81f422f0cb0.
Jan 17 01:31:40.984416 containerd[1507]: time="2026-01-17T01:31:40.983400127Z" level=info msg="StartContainer for \"976150ae4f1f2d7902174278d7712d422f7a302d0fae3ead2648a81f422f0cb0\" returns successfully"
Jan 17 01:31:41.878822 kubelet[2681]: E0117 01:31:41.878032 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 01:31:41.878822 kubelet[2681]: W0117 01:31:41.878090 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 01:31:41.878822 kubelet[2681]: E0117 01:31:41.878136 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 01:31:41.888508 kubelet[2681]: I0117 01:31:41.888181 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77cffb547c-5kd8w" podStartSLOduration=2.36084839 podStartE2EDuration="5.887292674s" podCreationTimestamp="2026-01-17 01:31:36 +0000 UTC" firstStartedPulling="2026-01-17 01:31:37.19302417 +0000 UTC m=+24.781141359" lastFinishedPulling="2026-01-17 01:31:40.719468436 +0000 UTC m=+28.307585643" observedRunningTime="2026-01-17 01:31:41.884715774 +0000 UTC m=+29.472833002" watchObservedRunningTime="2026-01-17 01:31:41.887292674 +0000 UTC m=+29.475409883"
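The startup-latency record above reconciles exactly if podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling); a quick check, with the formula inferred from the numbers rather than quoted from kubelet source:

    # Reconciling the pod_startup_latency_tracker figures logged above.
    from datetime import datetime, timezone

    def t(s: str) -> datetime:
        return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    created    = t("2026-01-17 01:31:36.000000")   # podCreationTimestamp
    first_pull = t("2026-01-17 01:31:37.193024")   # firstStartedPulling (truncated)
    last_pull  = t("2026-01-17 01:31:40.719468")   # lastFinishedPulling (truncated)
    running    = t("2026-01-17 01:31:41.887293")   # watchObservedRunningTime (rounded)

    e2e = (running - created).total_seconds()              # ~5.887s, matches podStartE2EDuration
    slo = e2e - (last_pull - first_pull).total_seconds()   # ~2.361s, matches podStartSLOduration
    print(f"E2E {e2e:.3f}s, SLO {slo:.3f}s (pull window excluded)")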
Jan 17 01:31:42.672402 kubelet[2681]: E0117 01:31:42.671799 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5"
Jan 17 01:31:42.839135 kubelet[2681]: I0117 01:31:42.838354 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 01:31:42.902717 kubelet[2681]: E0117 01:31:42.902088 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 01:31:42.902717 kubelet[2681]: W0117 01:31:42.902166 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 01:31:42.902717 kubelet[2681]: E0117 01:31:42.902231 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 01:31:43.153460 containerd[1507]: time="2026-01-17T01:31:43.153376439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 01:31:43.155151 containerd[1507]: time="2026-01-17T01:31:43.155050917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 17 01:31:43.156603 containerd[1507]: time="2026-01-17T01:31:43.156528637Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 01:31:43.162282 containerd[1507]: time="2026-01-17T01:31:43.162246456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 01:31:43.163778 containerd[1507]: time="2026-01-17T01:31:43.163420401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.443494866s"
Jan 17 01:31:43.163778 containerd[1507]: time="2026-01-17T01:31:43.163471918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 17 01:31:43.168183 containerd[1507]: time="2026-01-17T01:31:43.168153111Z" level=info msg="CreateContainer within sandbox \"d456d3daacdbc4138901ca9e5d3bfb4fbfc02a6236e444b9f8661c676dee854d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 01:31:43.210183 containerd[1507]: time="2026-01-17T01:31:43.209528147Z" level=info msg="CreateContainer within sandbox \"d456d3daacdbc4138901ca9e5d3bfb4fbfc02a6236e444b9f8661c676dee854d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1518172b0fef1a1ef4c34cd3c294b96fcad3696200c78ef0c91cfd9f078b5d41\""
Jan 17 01:31:43.215952 containerd[1507]: time="2026-01-17T01:31:43.212987136Z" level=info msg="StartContainer for \"1518172b0fef1a1ef4c34cd3c294b96fcad3696200c78ef0c91cfd9f078b5d41\""
Jan 17 01:31:43.273413 systemd[1]: Started cri-containerd-1518172b0fef1a1ef4c34cd3c294b96fcad3696200c78ef0c91cfd9f078b5d41.scope - libcontainer container 1518172b0fef1a1ef4c34cd3c294b96fcad3696200c78ef0c91cfd9f078b5d41.
Jan 17 01:31:43.331725 containerd[1507]: time="2026-01-17T01:31:43.331572246Z" level=info msg="StartContainer for \"1518172b0fef1a1ef4c34cd3c294b96fcad3696200c78ef0c91cfd9f078b5d41\" returns successfully"
Jan 17 01:31:43.356717 systemd[1]: cri-containerd-1518172b0fef1a1ef4c34cd3c294b96fcad3696200c78ef0c91cfd9f078b5d41.scope: Deactivated successfully.
Jan 17 01:31:43.411185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1518172b0fef1a1ef4c34cd3c294b96fcad3696200c78ef0c91cfd9f078b5d41-rootfs.mount: Deactivated successfully.
Jan 17 01:31:43.463881 containerd[1507]: time="2026-01-17T01:31:43.449898673Z" level=info msg="shim disconnected" id=1518172b0fef1a1ef4c34cd3c294b96fcad3696200c78ef0c91cfd9f078b5d41 namespace=k8s.io Jan 17 01:31:43.463881 containerd[1507]: time="2026-01-17T01:31:43.463588185Z" level=warning msg="cleaning up after shim disconnected" id=1518172b0fef1a1ef4c34cd3c294b96fcad3696200c78ef0c91cfd9f078b5d41 namespace=k8s.io Jan 17 01:31:43.463881 containerd[1507]: time="2026-01-17T01:31:43.463626144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 01:31:43.488481 containerd[1507]: time="2026-01-17T01:31:43.488352504Z" level=warning msg="cleanup warnings time=\"2026-01-17T01:31:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 01:31:43.846866 containerd[1507]: time="2026-01-17T01:31:43.846660147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 01:31:44.672895 kubelet[2681]: E0117 01:31:44.671365 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5" Jan 17 01:31:45.530187 kubelet[2681]: I0117 01:31:45.529564 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 01:31:46.676949 kubelet[2681]: E0117 01:31:46.676883 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5" Jan 17 01:31:48.672520 kubelet[2681]: E0117 01:31:48.671849 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5" Jan 17 01:31:48.725307 containerd[1507]: time="2026-01-17T01:31:48.725192515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:31:48.743665 containerd[1507]: time="2026-01-17T01:31:48.743317018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 01:31:48.743665 containerd[1507]: time="2026-01-17T01:31:48.743556877Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:31:48.747887 containerd[1507]: time="2026-01-17T01:31:48.747851504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:31:48.749925 containerd[1507]: time="2026-01-17T01:31:48.748988424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.902202521s" Jan 17 01:31:48.749925 containerd[1507]: time="2026-01-17T01:31:48.749666851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 01:31:48.753549 containerd[1507]: time="2026-01-17T01:31:48.753503683Z" level=info msg="CreateContainer within sandbox \"d456d3daacdbc4138901ca9e5d3bfb4fbfc02a6236e444b9f8661c676dee854d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 01:31:48.786808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount512047223.mount: Deactivated successfully. Jan 17 01:31:48.804842 containerd[1507]: time="2026-01-17T01:31:48.804718195Z" level=info msg="CreateContainer within sandbox \"d456d3daacdbc4138901ca9e5d3bfb4fbfc02a6236e444b9f8661c676dee854d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"433c1dead5bf30db32a5e89a3e5abfe5460e4162cbc1ba8e454311c648c32986\"" Jan 17 01:31:48.806340 containerd[1507]: time="2026-01-17T01:31:48.805902016Z" level=info msg="StartContainer for \"433c1dead5bf30db32a5e89a3e5abfe5460e4162cbc1ba8e454311c648c32986\"" Jan 17 01:31:48.877661 systemd[1]: Started cri-containerd-433c1dead5bf30db32a5e89a3e5abfe5460e4162cbc1ba8e454311c648c32986.scope - libcontainer container 433c1dead5bf30db32a5e89a3e5abfe5460e4162cbc1ba8e454311c648c32986. Jan 17 01:31:48.938670 containerd[1507]: time="2026-01-17T01:31:48.938610409Z" level=info msg="StartContainer for \"433c1dead5bf30db32a5e89a3e5abfe5460e4162cbc1ba8e454311c648c32986\" returns successfully" Jan 17 01:31:49.935390 systemd[1]: cri-containerd-433c1dead5bf30db32a5e89a3e5abfe5460e4162cbc1ba8e454311c648c32986.scope: Deactivated successfully. Jan 17 01:31:50.004979 kubelet[2681]: I0117 01:31:50.004870 2681 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 01:31:50.055620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-433c1dead5bf30db32a5e89a3e5abfe5460e4162cbc1ba8e454311c648c32986-rootfs.mount: Deactivated successfully. 
Jan 17 01:31:50.189541 containerd[1507]: time="2026-01-17T01:31:50.188890521Z" level=info msg="shim disconnected" id=433c1dead5bf30db32a5e89a3e5abfe5460e4162cbc1ba8e454311c648c32986 namespace=k8s.io Jan 17 01:31:50.189541 containerd[1507]: time="2026-01-17T01:31:50.189138545Z" level=warning msg="cleaning up after shim disconnected" id=433c1dead5bf30db32a5e89a3e5abfe5460e4162cbc1ba8e454311c648c32986 namespace=k8s.io Jan 17 01:31:50.189541 containerd[1507]: time="2026-01-17T01:31:50.189159237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 01:31:50.299934 kubelet[2681]: I0117 01:31:50.297330 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/307224a1-2fa9-44a1-ad77-684cc2300054-calico-apiserver-certs\") pod \"calico-apiserver-6594455f78-c92j5\" (UID: \"307224a1-2fa9-44a1-ad77-684cc2300054\") " pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" Jan 17 01:31:50.299934 kubelet[2681]: I0117 01:31:50.297442 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50610d67-f39f-4c35-8e6d-c6596bfafc13-tigera-ca-bundle\") pod \"calico-kube-controllers-c7d5775b-t6c4g\" (UID: \"50610d67-f39f-4c35-8e6d-c6596bfafc13\") " pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" Jan 17 01:31:50.299934 kubelet[2681]: I0117 01:31:50.297482 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52hqc\" (UniqueName: \"kubernetes.io/projected/60848184-c0a6-4a54-ba29-7889e424733e-kube-api-access-52hqc\") pod \"coredns-668d6bf9bc-9fkwl\" (UID: \"60848184-c0a6-4a54-ba29-7889e424733e\") " pod="kube-system/coredns-668d6bf9bc-9fkwl" Jan 17 01:31:50.299934 kubelet[2681]: I0117 01:31:50.297518 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a82681c-7367-4e06-9a33-de4eeb86a08d-config-volume\") pod \"coredns-668d6bf9bc-j2ntp\" (UID: \"4a82681c-7367-4e06-9a33-de4eeb86a08d\") " pod="kube-system/coredns-668d6bf9bc-j2ntp" Jan 17 01:31:50.299934 kubelet[2681]: I0117 01:31:50.297554 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78wnt\" (UniqueName: \"kubernetes.io/projected/4a82681c-7367-4e06-9a33-de4eeb86a08d-kube-api-access-78wnt\") pod \"coredns-668d6bf9bc-j2ntp\" (UID: \"4a82681c-7367-4e06-9a33-de4eeb86a08d\") " pod="kube-system/coredns-668d6bf9bc-j2ntp" Jan 17 01:31:50.300418 kubelet[2681]: I0117 01:31:50.297584 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6rcl\" (UniqueName: \"kubernetes.io/projected/307224a1-2fa9-44a1-ad77-684cc2300054-kube-api-access-f6rcl\") pod \"calico-apiserver-6594455f78-c92j5\" (UID: \"307224a1-2fa9-44a1-ad77-684cc2300054\") " pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" Jan 17 01:31:50.300418 kubelet[2681]: I0117 01:31:50.297613 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/166a9b11-96f4-43ec-a822-357f748e3c20-calico-apiserver-certs\") pod \"calico-apiserver-6594455f78-9rjtt\" (UID: \"166a9b11-96f4-43ec-a822-357f748e3c20\") " pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" Jan 17 01:31:50.300418 kubelet[2681]: 
I0117 01:31:50.297641 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60848184-c0a6-4a54-ba29-7889e424733e-config-volume\") pod \"coredns-668d6bf9bc-9fkwl\" (UID: \"60848184-c0a6-4a54-ba29-7889e424733e\") " pod="kube-system/coredns-668d6bf9bc-9fkwl" Jan 17 01:31:50.300418 kubelet[2681]: I0117 01:31:50.297672 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c45pt\" (UniqueName: \"kubernetes.io/projected/166a9b11-96f4-43ec-a822-357f748e3c20-kube-api-access-c45pt\") pod \"calico-apiserver-6594455f78-9rjtt\" (UID: \"166a9b11-96f4-43ec-a822-357f748e3c20\") " pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" Jan 17 01:31:50.300418 kubelet[2681]: I0117 01:31:50.297707 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz5x7\" (UniqueName: \"kubernetes.io/projected/50610d67-f39f-4c35-8e6d-c6596bfafc13-kube-api-access-qz5x7\") pod \"calico-kube-controllers-c7d5775b-t6c4g\" (UID: \"50610d67-f39f-4c35-8e6d-c6596bfafc13\") " pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" Jan 17 01:31:50.324500 systemd[1]: Created slice kubepods-burstable-pod60848184_c0a6_4a54_ba29_7889e424733e.slice - libcontainer container kubepods-burstable-pod60848184_c0a6_4a54_ba29_7889e424733e.slice. Jan 17 01:31:50.330922 kubelet[2681]: W0117 01:31:50.329144 2681 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:srv-dv3jc.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-dv3jc.gb1.brightbox.com' and this object Jan 17 01:31:50.331134 kubelet[2681]: E0117 01:31:50.331064 2681 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:srv-dv3jc.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-dv3jc.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 17 01:31:50.341894 systemd[1]: Created slice kubepods-burstable-pod4a82681c_7367_4e06_9a33_de4eeb86a08d.slice - libcontainer container kubepods-burstable-pod4a82681c_7367_4e06_9a33_de4eeb86a08d.slice. Jan 17 01:31:50.358645 systemd[1]: Created slice kubepods-besteffort-pod307224a1_2fa9_44a1_ad77_684cc2300054.slice - libcontainer container kubepods-besteffort-pod307224a1_2fa9_44a1_ad77_684cc2300054.slice. Jan 17 01:31:50.376600 systemd[1]: Created slice kubepods-besteffort-pod50610d67_f39f_4c35_8e6d_c6596bfafc13.slice - libcontainer container kubepods-besteffort-pod50610d67_f39f_4c35_8e6d_c6596bfafc13.slice. Jan 17 01:31:50.394702 systemd[1]: Created slice kubepods-besteffort-pod166a9b11_96f4_43ec_a822_357f748e3c20.slice - libcontainer container kubepods-besteffort-pod166a9b11_96f4_43ec_a822_357f748e3c20.slice. 
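The reflector warning wedged into the volume list above ("no relationship found between node ... and this object") is the node authorizer at work, not a misconfiguration: a kubelet identity such as system:node:srv-dv3jc.gb1.brightbox.com may only read ConfigMaps and Secrets referenced by pods already bound to that node, and at this instant the goldmane pod that mounts goldmane-ca-bundle has not been bound yet. The denial clears on its own once the pod lands. A hedged client-go sketch to replay the authorizer's decision from an admin credential (the kubeconfig path is an assumption, not taken from the log):

```go
// sar_probe.go - ask the API server whether the node identity may list
// ConfigMaps in calico-system, reproducing the reflector's denial above.
package main

import (
	"context"
	"fmt"
	"log"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User:   "system:node:srv-dv3jc.gb1.brightbox.com",
			Groups: []string{"system:nodes"}, // node authorizer keys on this group
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "calico-system",
				Verb:      "list",
				Resource:  "configmaps",
				Name:      "goldmane-ca-bundle",
			},
		},
	}
	res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
		context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("allowed=%v denied=%v reason=%q\n",
		res.Status.Allowed, res.Status.Denied, res.Status.Reason)
}
```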
Jan 17 01:31:50.398899 kubelet[2681]: I0117 01:31:50.398701 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6kbs\" (UniqueName: \"kubernetes.io/projected/a924e366-2268-4f4b-91a1-779a1cb6d303-kube-api-access-j6kbs\") pod \"goldmane-666569f655-vdtbx\" (UID: \"a924e366-2268-4f4b-91a1-779a1cb6d303\") " pod="calico-system/goldmane-666569f655-vdtbx" Jan 17 01:31:50.398899 kubelet[2681]: I0117 01:31:50.398809 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/685e184f-b85a-4991-b9eb-dc7e37a24e73-whisker-backend-key-pair\") pod \"whisker-84769948d8-jfj2k\" (UID: \"685e184f-b85a-4991-b9eb-dc7e37a24e73\") " pod="calico-system/whisker-84769948d8-jfj2k" Jan 17 01:31:50.398899 kubelet[2681]: I0117 01:31:50.398872 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb4hf\" (UniqueName: \"kubernetes.io/projected/685e184f-b85a-4991-b9eb-dc7e37a24e73-kube-api-access-hb4hf\") pod \"whisker-84769948d8-jfj2k\" (UID: \"685e184f-b85a-4991-b9eb-dc7e37a24e73\") " pod="calico-system/whisker-84769948d8-jfj2k" Jan 17 01:31:50.399131 kubelet[2681]: I0117 01:31:50.398909 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a924e366-2268-4f4b-91a1-779a1cb6d303-goldmane-key-pair\") pod \"goldmane-666569f655-vdtbx\" (UID: \"a924e366-2268-4f4b-91a1-779a1cb6d303\") " pod="calico-system/goldmane-666569f655-vdtbx" Jan 17 01:31:50.399131 kubelet[2681]: I0117 01:31:50.399013 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a924e366-2268-4f4b-91a1-779a1cb6d303-config\") pod \"goldmane-666569f655-vdtbx\" (UID: \"a924e366-2268-4f4b-91a1-779a1cb6d303\") " pod="calico-system/goldmane-666569f655-vdtbx" Jan 17 01:31:50.399131 kubelet[2681]: I0117 01:31:50.399085 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/685e184f-b85a-4991-b9eb-dc7e37a24e73-whisker-ca-bundle\") pod \"whisker-84769948d8-jfj2k\" (UID: \"685e184f-b85a-4991-b9eb-dc7e37a24e73\") " pod="calico-system/whisker-84769948d8-jfj2k" Jan 17 01:31:50.399304 kubelet[2681]: I0117 01:31:50.399153 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a924e366-2268-4f4b-91a1-779a1cb6d303-goldmane-ca-bundle\") pod \"goldmane-666569f655-vdtbx\" (UID: \"a924e366-2268-4f4b-91a1-779a1cb6d303\") " pod="calico-system/goldmane-666569f655-vdtbx" Jan 17 01:31:50.417811 systemd[1]: Created slice kubepods-besteffort-poda924e366_2268_4f4b_91a1_779a1cb6d303.slice - libcontainer container kubepods-besteffort-poda924e366_2268_4f4b_91a1_779a1cb6d303.slice. Jan 17 01:31:50.429512 systemd[1]: Created slice kubepods-besteffort-pod685e184f_b85a_4991_b9eb_dc7e37a24e73.slice - libcontainer container kubepods-besteffort-pod685e184f_b85a_4991_b9eb_dc7e37a24e73.slice. 
Jan 17 01:31:50.648412 containerd[1507]: time="2026-01-17T01:31:50.648098447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fkwl,Uid:60848184-c0a6-4a54-ba29-7889e424733e,Namespace:kube-system,Attempt:0,}" Jan 17 01:31:50.656022 containerd[1507]: time="2026-01-17T01:31:50.655354949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j2ntp,Uid:4a82681c-7367-4e06-9a33-de4eeb86a08d,Namespace:kube-system,Attempt:0,}" Jan 17 01:31:50.688631 containerd[1507]: time="2026-01-17T01:31:50.687510760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594455f78-c92j5,Uid:307224a1-2fa9-44a1-ad77-684cc2300054,Namespace:calico-apiserver,Attempt:0,}" Jan 17 01:31:50.687970 systemd[1]: Created slice kubepods-besteffort-podc16de311_2d09_4fff_8444_304a8ff3b2b5.slice - libcontainer container kubepods-besteffort-podc16de311_2d09_4fff_8444_304a8ff3b2b5.slice. Jan 17 01:31:50.692734 containerd[1507]: time="2026-01-17T01:31:50.692700442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c7d5775b-t6c4g,Uid:50610d67-f39f-4c35-8e6d-c6596bfafc13,Namespace:calico-system,Attempt:0,}" Jan 17 01:31:50.694532 containerd[1507]: time="2026-01-17T01:31:50.694286956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-crszt,Uid:c16de311-2d09-4fff-8444-304a8ff3b2b5,Namespace:calico-system,Attempt:0,}" Jan 17 01:31:50.716581 containerd[1507]: time="2026-01-17T01:31:50.716489132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594455f78-9rjtt,Uid:166a9b11-96f4-43ec-a822-357f748e3c20,Namespace:calico-apiserver,Attempt:0,}" Jan 17 01:31:50.765420 containerd[1507]: time="2026-01-17T01:31:50.765229777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84769948d8-jfj2k,Uid:685e184f-b85a-4991-b9eb-dc7e37a24e73,Namespace:calico-system,Attempt:0,}" Jan 17 01:31:50.941953 containerd[1507]: time="2026-01-17T01:31:50.941787786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 01:31:51.203246 containerd[1507]: time="2026-01-17T01:31:51.201290269Z" level=error msg="Failed to destroy network for sandbox \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.207443 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4-shm.mount: Deactivated successfully. 
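Every sandbox failure from here on has the same root cause: the Calico CNI plugin resolves the node's identity from /var/lib/calico/nodename, a file the calico/node container writes after it starts, and the pull of that image ("PullImage ghcr.io/flatcar/calico/node:v3.30.4" above) has only just begun. Until it completes and the container runs, every CNI add and delete stops at the same stat. A sketch of the failing guard, assuming the plugin's lookup reduces to reading that file (the real code in projectcalico/calico does more than this):

```go
// nodename_check.go - reproduce the guard behind "stat /var/lib/calico/
// nodename: no such file or directory" relayed by containerd above.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func nodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// os.Stat's error text matches the log exactly; the hint is appended
		// the same way the plugin phrases it.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}
```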
Jan 17 01:31:51.218694 containerd[1507]: time="2026-01-17T01:31:51.218345569Z" level=error msg="encountered an error cleaning up failed sandbox \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.219045 containerd[1507]: time="2026-01-17T01:31:51.218864481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fkwl,Uid:60848184-c0a6-4a54-ba29-7889e424733e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.240477 kubelet[2681]: E0117 01:31:51.219602 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.243348 kubelet[2681]: E0117 01:31:51.240607 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9fkwl" Jan 17 01:31:51.243477 kubelet[2681]: E0117 01:31:51.243409 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9fkwl" Jan 17 01:31:51.243681 kubelet[2681]: E0117 01:31:51.243499 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9fkwl_kube-system(60848184-c0a6-4a54-ba29-7889e424733e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9fkwl_kube-system(60848184-c0a6-4a54-ba29-7889e424733e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9fkwl" podUID="60848184-c0a6-4a54-ba29-7889e424733e" Jan 17 01:31:51.248990 containerd[1507]: time="2026-01-17T01:31:51.246018883Z" level=error msg="Failed to destroy network for sandbox \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 17 01:31:51.248990 containerd[1507]: time="2026-01-17T01:31:51.246636066Z" level=error msg="encountered an error cleaning up failed sandbox \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.248990 containerd[1507]: time="2026-01-17T01:31:51.246744284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594455f78-9rjtt,Uid:166a9b11-96f4-43ec-a822-357f748e3c20,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.249424 kubelet[2681]: E0117 01:31:51.249382 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.249494 kubelet[2681]: E0117 01:31:51.249445 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" Jan 17 01:31:51.249494 kubelet[2681]: E0117 01:31:51.249475 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" Jan 17 01:31:51.249589 kubelet[2681]: E0117 01:31:51.249519 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6594455f78-9rjtt_calico-apiserver(166a9b11-96f4-43ec-a822-357f748e3c20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6594455f78-9rjtt_calico-apiserver(166a9b11-96f4-43ec-a822-357f748e3c20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" podUID="166a9b11-96f4-43ec-a822-357f748e3c20" Jan 17 01:31:51.250720 containerd[1507]: time="2026-01-17T01:31:51.250416746Z" level=error msg="Failed to destroy network for sandbox \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.252593 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d-shm.mount: Deactivated successfully. Jan 17 01:31:51.262399 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b-shm.mount: Deactivated successfully. Jan 17 01:31:51.264649 containerd[1507]: time="2026-01-17T01:31:51.264268447Z" level=error msg="encountered an error cleaning up failed sandbox \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.265094 containerd[1507]: time="2026-01-17T01:31:51.264887899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-crszt,Uid:c16de311-2d09-4fff-8444-304a8ff3b2b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.265718 kubelet[2681]: E0117 01:31:51.265663 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.265826 kubelet[2681]: E0117 01:31:51.265741 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-crszt" Jan 17 01:31:51.265932 kubelet[2681]: E0117 01:31:51.265826 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-crszt" Jan 17 01:31:51.265994 kubelet[2681]: E0117 01:31:51.265918 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-crszt_calico-system(c16de311-2d09-4fff-8444-304a8ff3b2b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-crszt_calico-system(c16de311-2d09-4fff-8444-304a8ff3b2b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5" Jan 17 01:31:51.273730 containerd[1507]: time="2026-01-17T01:31:51.273548457Z" level=error msg="Failed to destroy network for sandbox \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.278433 containerd[1507]: time="2026-01-17T01:31:51.277636419Z" level=error msg="encountered an error cleaning up failed sandbox \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.278433 containerd[1507]: time="2026-01-17T01:31:51.277939341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j2ntp,Uid:4a82681c-7367-4e06-9a33-de4eeb86a08d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.283982 kubelet[2681]: E0117 01:31:51.278793 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.283982 kubelet[2681]: E0117 01:31:51.278955 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j2ntp" Jan 17 01:31:51.283982 kubelet[2681]: E0117 01:31:51.278987 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j2ntp" Jan 17 01:31:51.282844 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92-shm.mount: Deactivated successfully. 
Jan 17 01:31:51.284466 containerd[1507]: time="2026-01-17T01:31:51.278541986Z" level=error msg="Failed to destroy network for sandbox \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.284466 containerd[1507]: time="2026-01-17T01:31:51.282097177Z" level=error msg="encountered an error cleaning up failed sandbox \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.284466 containerd[1507]: time="2026-01-17T01:31:51.282204451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c7d5775b-t6c4g,Uid:50610d67-f39f-4c35-8e6d-c6596bfafc13,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.285024 kubelet[2681]: E0117 01:31:51.279047 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j2ntp_kube-system(4a82681c-7367-4e06-9a33-de4eeb86a08d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j2ntp_kube-system(4a82681c-7367-4e06-9a33-de4eeb86a08d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j2ntp" podUID="4a82681c-7367-4e06-9a33-de4eeb86a08d" Jan 17 01:31:51.285024 kubelet[2681]: E0117 01:31:51.282448 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.285024 kubelet[2681]: E0117 01:31:51.282512 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" Jan 17 01:31:51.285260 kubelet[2681]: E0117 01:31:51.282539 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" Jan 17 01:31:51.285260 kubelet[2681]: E0117 01:31:51.282593 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c7d5775b-t6c4g_calico-system(50610d67-f39f-4c35-8e6d-c6596bfafc13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c7d5775b-t6c4g_calico-system(50610d67-f39f-4c35-8e6d-c6596bfafc13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" podUID="50610d67-f39f-4c35-8e6d-c6596bfafc13" Jan 17 01:31:51.293712 containerd[1507]: time="2026-01-17T01:31:51.293627897Z" level=error msg="Failed to destroy network for sandbox \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.294793 containerd[1507]: time="2026-01-17T01:31:51.294554088Z" level=error msg="encountered an error cleaning up failed sandbox \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.295510 containerd[1507]: time="2026-01-17T01:31:51.295334203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84769948d8-jfj2k,Uid:685e184f-b85a-4991-b9eb-dc7e37a24e73,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.297280 kubelet[2681]: E0117 01:31:51.295773 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.297280 kubelet[2681]: E0117 01:31:51.295851 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84769948d8-jfj2k" Jan 17 01:31:51.297280 kubelet[2681]: E0117 01:31:51.295888 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84769948d8-jfj2k" Jan 17 01:31:51.297530 kubelet[2681]: E0117 01:31:51.295966 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84769948d8-jfj2k_calico-system(685e184f-b85a-4991-b9eb-dc7e37a24e73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84769948d8-jfj2k_calico-system(685e184f-b85a-4991-b9eb-dc7e37a24e73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84769948d8-jfj2k" podUID="685e184f-b85a-4991-b9eb-dc7e37a24e73" Jan 17 01:31:51.307138 containerd[1507]: time="2026-01-17T01:31:51.307065873Z" level=error msg="Failed to destroy network for sandbox \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.307739 containerd[1507]: time="2026-01-17T01:31:51.307639063Z" level=error msg="encountered an error cleaning up failed sandbox \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.308919 containerd[1507]: time="2026-01-17T01:31:51.308732201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594455f78-c92j5,Uid:307224a1-2fa9-44a1-ad77-684cc2300054,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.309499 kubelet[2681]: E0117 01:31:51.309306 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.309499 kubelet[2681]: E0117 01:31:51.309408 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" Jan 17 01:31:51.309499 kubelet[2681]: E0117 01:31:51.309461 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" Jan 17 01:31:51.310317 kubelet[2681]: E0117 01:31:51.310222 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6594455f78-c92j5_calico-apiserver(307224a1-2fa9-44a1-ad77-684cc2300054)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6594455f78-c92j5_calico-apiserver(307224a1-2fa9-44a1-ad77-684cc2300054)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" podUID="307224a1-2fa9-44a1-ad77-684cc2300054" Jan 17 01:31:51.626981 containerd[1507]: time="2026-01-17T01:31:51.626753830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vdtbx,Uid:a924e366-2268-4f4b-91a1-779a1cb6d303,Namespace:calico-system,Attempt:0,}" Jan 17 01:31:51.718202 containerd[1507]: time="2026-01-17T01:31:51.718110019Z" level=error msg="Failed to destroy network for sandbox \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.718872 containerd[1507]: time="2026-01-17T01:31:51.718795453Z" level=error msg="encountered an error cleaning up failed sandbox \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.719035 containerd[1507]: time="2026-01-17T01:31:51.718967574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vdtbx,Uid:a924e366-2268-4f4b-91a1-779a1cb6d303,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.719534 kubelet[2681]: E0117 01:31:51.719461 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:51.719612 kubelet[2681]: E0117 01:31:51.719570 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vdtbx" Jan 17 01:31:51.719711 kubelet[2681]: E0117 01:31:51.719629 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vdtbx" Jan 17 01:31:51.719769 kubelet[2681]: E0117 01:31:51.719717 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vdtbx_calico-system(a924e366-2268-4f4b-91a1-779a1cb6d303)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vdtbx_calico-system(a924e366-2268-4f4b-91a1-779a1cb6d303)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vdtbx" podUID="a924e366-2268-4f4b-91a1-779a1cb6d303" Jan 17 01:31:51.917693 kubelet[2681]: I0117 01:31:51.916845 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Jan 17 01:31:51.919835 kubelet[2681]: I0117 01:31:51.919791 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Jan 17 01:31:51.931095 kubelet[2681]: I0117 01:31:51.931011 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Jan 17 01:31:51.957228 containerd[1507]: time="2026-01-17T01:31:51.956918303Z" level=info msg="StopPodSandbox for \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\"" Jan 17 01:31:51.959777 kubelet[2681]: I0117 01:31:51.958755 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Jan 17 01:31:51.961870 containerd[1507]: time="2026-01-17T01:31:51.959031993Z" level=info msg="Ensure that sandbox a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b in task-service has been cleanup successfully" Jan 17 01:31:51.964039 containerd[1507]: time="2026-01-17T01:31:51.963735053Z" level=info msg="StopPodSandbox for \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\"" Jan 17 01:31:51.966432 containerd[1507]: time="2026-01-17T01:31:51.966386925Z" level=info msg="Ensure that sandbox 133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d in task-service has been cleanup successfully" Jan 17 01:31:51.988154 containerd[1507]: time="2026-01-17T01:31:51.986942170Z" level=info msg="StopPodSandbox for \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\"" Jan 17 01:31:51.988154 containerd[1507]: time="2026-01-17T01:31:51.987367956Z" level=info msg="Ensure that sandbox 2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23 in task-service has been cleanup successfully" Jan 17 
01:31:51.989224 containerd[1507]: time="2026-01-17T01:31:51.986940809Z" level=info msg="StopPodSandbox for \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\"" Jan 17 01:31:51.989892 containerd[1507]: time="2026-01-17T01:31:51.989863334Z" level=info msg="Ensure that sandbox cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4 in task-service has been cleanup successfully" Jan 17 01:31:51.993690 kubelet[2681]: I0117 01:31:51.991605 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Jan 17 01:31:51.993799 containerd[1507]: time="2026-01-17T01:31:51.992809206Z" level=info msg="StopPodSandbox for \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\"" Jan 17 01:31:51.993799 containerd[1507]: time="2026-01-17T01:31:51.993471603Z" level=info msg="Ensure that sandbox df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3 in task-service has been cleanup successfully" Jan 17 01:31:52.000834 kubelet[2681]: I0117 01:31:52.000787 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Jan 17 01:31:52.009194 containerd[1507]: time="2026-01-17T01:31:52.004256610Z" level=info msg="StopPodSandbox for \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\"" Jan 17 01:31:52.022936 containerd[1507]: time="2026-01-17T01:31:52.022368198Z" level=info msg="Ensure that sandbox 0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276 in task-service has been cleanup successfully" Jan 17 01:31:52.050405 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23-shm.mount: Deactivated successfully. Jan 17 01:31:52.050593 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3-shm.mount: Deactivated successfully. Jan 17 01:31:52.050719 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c-shm.mount: Deactivated successfully. 
Jan 17 01:31:52.084967 kubelet[2681]: I0117 01:31:52.084879 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Jan 17 01:31:52.093850 containerd[1507]: time="2026-01-17T01:31:52.085931780Z" level=info msg="StopPodSandbox for \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\"" Jan 17 01:31:52.093850 containerd[1507]: time="2026-01-17T01:31:52.090823915Z" level=info msg="Ensure that sandbox b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c in task-service has been cleanup successfully" Jan 17 01:31:52.108086 kubelet[2681]: I0117 01:31:52.106557 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Jan 17 01:31:52.114827 containerd[1507]: time="2026-01-17T01:31:52.114331444Z" level=info msg="StopPodSandbox for \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\"" Jan 17 01:31:52.114827 containerd[1507]: time="2026-01-17T01:31:52.114683639Z" level=info msg="Ensure that sandbox ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92 in task-service has been cleanup successfully" Jan 17 01:31:52.198540 containerd[1507]: time="2026-01-17T01:31:52.198466732Z" level=error msg="StopPodSandbox for \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\" failed" error="failed to destroy network for sandbox \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:52.199398 kubelet[2681]: E0117 01:31:52.199144 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Jan 17 01:31:52.204853 kubelet[2681]: E0117 01:31:52.199248 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4"} Jan 17 01:31:52.204853 kubelet[2681]: E0117 01:31:52.204546 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60848184-c0a6-4a54-ba29-7889e424733e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:31:52.204853 kubelet[2681]: E0117 01:31:52.204591 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60848184-c0a6-4a54-ba29-7889e424733e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9fkwl" podUID="60848184-c0a6-4a54-ba29-7889e424733e" Jan 17 01:31:52.210945 containerd[1507]: time="2026-01-17T01:31:52.210826542Z" level=error msg="StopPodSandbox for \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\" failed" error="failed to destroy network for sandbox \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:52.212627 kubelet[2681]: E0117 01:31:52.212235 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Jan 17 01:31:52.212627 kubelet[2681]: E0117 01:31:52.212328 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23"} Jan 17 01:31:52.212627 kubelet[2681]: E0117 01:31:52.212398 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"685e184f-b85a-4991-b9eb-dc7e37a24e73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:31:52.212627 kubelet[2681]: E0117 01:31:52.212579 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"685e184f-b85a-4991-b9eb-dc7e37a24e73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84769948d8-jfj2k" podUID="685e184f-b85a-4991-b9eb-dc7e37a24e73" Jan 17 01:31:52.249433 containerd[1507]: time="2026-01-17T01:31:52.249150777Z" level=error msg="StopPodSandbox for \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\" failed" error="failed to destroy network for sandbox \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:52.253030 containerd[1507]: time="2026-01-17T01:31:52.250864747Z" level=error msg="StopPodSandbox for \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\" failed" error="failed to destroy network for sandbox \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 17 01:31:52.253153 kubelet[2681]: E0117 01:31:52.249891 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Jan 17 01:31:52.253153 kubelet[2681]: E0117 01:31:52.250051 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d"} Jan 17 01:31:52.253153 kubelet[2681]: E0117 01:31:52.250148 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"166a9b11-96f4-43ec-a822-357f748e3c20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:31:52.253153 kubelet[2681]: E0117 01:31:52.250210 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"166a9b11-96f4-43ec-a822-357f748e3c20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" podUID="166a9b11-96f4-43ec-a822-357f748e3c20" Jan 17 01:31:52.254489 kubelet[2681]: E0117 01:31:52.251447 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Jan 17 01:31:52.254489 kubelet[2681]: E0117 01:31:52.251510 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b"} Jan 17 01:31:52.254489 kubelet[2681]: E0117 01:31:52.251558 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c16de311-2d09-4fff-8444-304a8ff3b2b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:31:52.254489 kubelet[2681]: E0117 01:31:52.251659 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c16de311-2d09-4fff-8444-304a8ff3b2b5\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5" Jan 17 01:31:52.264046 containerd[1507]: time="2026-01-17T01:31:52.262307248Z" level=error msg="StopPodSandbox for \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\" failed" error="failed to destroy network for sandbox \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:52.264396 kubelet[2681]: E0117 01:31:52.262867 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Jan 17 01:31:52.264396 kubelet[2681]: E0117 01:31:52.262966 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276"} Jan 17 01:31:52.264396 kubelet[2681]: E0117 01:31:52.263029 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a924e366-2268-4f4b-91a1-779a1cb6d303\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:31:52.264396 kubelet[2681]: E0117 01:31:52.263067 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a924e366-2268-4f4b-91a1-779a1cb6d303\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vdtbx" podUID="a924e366-2268-4f4b-91a1-779a1cb6d303" Jan 17 01:31:52.278779 containerd[1507]: time="2026-01-17T01:31:52.278692911Z" level=error msg="StopPodSandbox for \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\" failed" error="failed to destroy network for sandbox \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:52.279544 kubelet[2681]: E0117 01:31:52.279477 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Jan 17 01:31:52.280184 kubelet[2681]: E0117 01:31:52.279969 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92"} Jan 17 01:31:52.280184 kubelet[2681]: E0117 01:31:52.280056 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a82681c-7367-4e06-9a33-de4eeb86a08d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:31:52.280184 kubelet[2681]: E0117 01:31:52.280105 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a82681c-7367-4e06-9a33-de4eeb86a08d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j2ntp" podUID="4a82681c-7367-4e06-9a33-de4eeb86a08d" Jan 17 01:31:52.280474 kubelet[2681]: E0117 01:31:52.280418 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Jan 17 01:31:52.280474 kubelet[2681]: E0117 01:31:52.280459 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3"} Jan 17 01:31:52.280567 containerd[1507]: time="2026-01-17T01:31:52.280159935Z" level=error msg="StopPodSandbox for \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\" failed" error="failed to destroy network for sandbox \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:52.280630 kubelet[2681]: E0117 01:31:52.280495 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"307224a1-2fa9-44a1-ad77-684cc2300054\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jan 17 01:31:52.280630 kubelet[2681]: E0117 01:31:52.280525 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"307224a1-2fa9-44a1-ad77-684cc2300054\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" podUID="307224a1-2fa9-44a1-ad77-684cc2300054" Jan 17 01:31:52.292770 containerd[1507]: time="2026-01-17T01:31:52.292583763Z" level=error msg="StopPodSandbox for \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\" failed" error="failed to destroy network for sandbox \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:31:52.293086 kubelet[2681]: E0117 01:31:52.293013 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Jan 17 01:31:52.293200 kubelet[2681]: E0117 01:31:52.293091 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c"} Jan 17 01:31:52.293268 kubelet[2681]: E0117 01:31:52.293202 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"50610d67-f39f-4c35-8e6d-c6596bfafc13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:31:52.293376 kubelet[2681]: E0117 01:31:52.293260 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"50610d67-f39f-4c35-8e6d-c6596bfafc13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" podUID="50610d67-f39f-4c35-8e6d-c6596bfafc13" Jan 17 01:32:00.516796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount443157748.mount: Deactivated successfully. 
Jan 17 01:32:00.650174 containerd[1507]: time="2026-01-17T01:32:00.650091028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:32:00.658970 containerd[1507]: time="2026-01-17T01:32:00.658068955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 01:32:00.679093 containerd[1507]: time="2026-01-17T01:32:00.679044295Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:32:00.688348 containerd[1507]: time="2026-01-17T01:32:00.688299454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:32:00.691455 containerd[1507]: time="2026-01-17T01:32:00.691369066Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.747412967s" Jan 17 01:32:00.691455 containerd[1507]: time="2026-01-17T01:32:00.691435537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 01:32:00.745756 containerd[1507]: time="2026-01-17T01:32:00.745432418Z" level=info msg="CreateContainer within sandbox \"d456d3daacdbc4138901ca9e5d3bfb4fbfc02a6236e444b9f8661c676dee854d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 01:32:00.799168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596432755.mount: Deactivated successfully. Jan 17 01:32:00.807561 containerd[1507]: time="2026-01-17T01:32:00.807488046Z" level=info msg="CreateContainer within sandbox \"d456d3daacdbc4138901ca9e5d3bfb4fbfc02a6236e444b9f8661c676dee854d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a039439259c9b76ab22fbccf8671bf924eb5c34a31000b8a951a5473fd269cb0\"" Jan 17 01:32:00.812347 containerd[1507]: time="2026-01-17T01:32:00.812306725Z" level=info msg="StartContainer for \"a039439259c9b76ab22fbccf8671bf924eb5c34a31000b8a951a5473fd269cb0\"" Jan 17 01:32:00.954381 systemd[1]: Started cri-containerd-a039439259c9b76ab22fbccf8671bf924eb5c34a31000b8a951a5473fd269cb0.scope - libcontainer container a039439259c9b76ab22fbccf8671bf924eb5c34a31000b8a951a5473fd269cb0. 
Jan 17 01:32:01.018782 containerd[1507]: time="2026-01-17T01:32:01.018613111Z" level=info msg="StartContainer for \"a039439259c9b76ab22fbccf8671bf924eb5c34a31000b8a951a5473fd269cb0\" returns successfully" Jan 17 01:32:01.172956 kubelet[2681]: I0117 01:32:01.169258 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5p5ts" podStartSLOduration=1.793162684 podStartE2EDuration="25.166581566s" podCreationTimestamp="2026-01-17 01:31:36 +0000 UTC" firstStartedPulling="2026-01-17 01:31:37.319071841 +0000 UTC m=+24.907189031" lastFinishedPulling="2026-01-17 01:32:00.692490721 +0000 UTC m=+48.280607913" observedRunningTime="2026-01-17 01:32:01.160674459 +0000 UTC m=+48.748791675" watchObservedRunningTime="2026-01-17 01:32:01.166581566 +0000 UTC m=+48.754698769" Jan 17 01:32:01.310687 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 01:32:01.311535 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 01:32:01.728676 containerd[1507]: time="2026-01-17T01:32:01.728498793Z" level=info msg="StopPodSandbox for \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\"" Jan 17 01:32:02.139629 kubelet[2681]: I0117 01:32:02.139287 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 01:32:02.253178 containerd[1507]: 2026-01-17 01:32:01.884 [INFO][3875] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Jan 17 01:32:02.253178 containerd[1507]: 2026-01-17 01:32:01.889 [INFO][3875] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" iface="eth0" netns="/var/run/netns/cni-17959b17-fb8b-e66f-b319-fb45246f6d93" Jan 17 01:32:02.253178 containerd[1507]: 2026-01-17 01:32:01.889 [INFO][3875] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" iface="eth0" netns="/var/run/netns/cni-17959b17-fb8b-e66f-b319-fb45246f6d93" Jan 17 01:32:02.253178 containerd[1507]: 2026-01-17 01:32:01.892 [INFO][3875] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" iface="eth0" netns="/var/run/netns/cni-17959b17-fb8b-e66f-b319-fb45246f6d93" Jan 17 01:32:02.253178 containerd[1507]: 2026-01-17 01:32:01.892 [INFO][3875] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Jan 17 01:32:02.253178 containerd[1507]: 2026-01-17 01:32:01.892 [INFO][3875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Jan 17 01:32:02.253178 containerd[1507]: 2026-01-17 01:32:02.212 [INFO][3882] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" HandleID="k8s-pod-network.2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Workload="srv--dv3jc.gb1.brightbox.com-k8s-whisker--84769948d8--jfj2k-eth0" Jan 17 01:32:02.253178 containerd[1507]: 2026-01-17 01:32:02.221 [INFO][3882] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:02.253178 containerd[1507]: 2026-01-17 01:32:02.222 [INFO][3882] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:32:02.253178 containerd[1507]: 2026-01-17 01:32:02.240 [WARNING][3882] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" HandleID="k8s-pod-network.2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Workload="srv--dv3jc.gb1.brightbox.com-k8s-whisker--84769948d8--jfj2k-eth0" Jan 17 01:32:02.253178 containerd[1507]: 2026-01-17 01:32:02.240 [INFO][3882] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" HandleID="k8s-pod-network.2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Workload="srv--dv3jc.gb1.brightbox.com-k8s-whisker--84769948d8--jfj2k-eth0" Jan 17 01:32:02.253178 containerd[1507]: 2026-01-17 01:32:02.244 [INFO][3882] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:02.253178 containerd[1507]: 2026-01-17 01:32:02.248 [INFO][3875] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Jan 17 01:32:02.255443 containerd[1507]: time="2026-01-17T01:32:02.255297147Z" level=info msg="TearDown network for sandbox \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\" successfully" Jan 17 01:32:02.255443 containerd[1507]: time="2026-01-17T01:32:02.255340745Z" level=info msg="StopPodSandbox for \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\" returns successfully" Jan 17 01:32:02.262180 systemd[1]: run-netns-cni\x2d17959b17\x2dfb8b\x2de66f\x2db319\x2dfb45246f6d93.mount: Deactivated successfully. Jan 17 01:32:02.331542 kubelet[2681]: I0117 01:32:02.331225 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb4hf\" (UniqueName: \"kubernetes.io/projected/685e184f-b85a-4991-b9eb-dc7e37a24e73-kube-api-access-hb4hf\") pod \"685e184f-b85a-4991-b9eb-dc7e37a24e73\" (UID: \"685e184f-b85a-4991-b9eb-dc7e37a24e73\") " Jan 17 01:32:02.331542 kubelet[2681]: I0117 01:32:02.331535 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/685e184f-b85a-4991-b9eb-dc7e37a24e73-whisker-ca-bundle\") pod \"685e184f-b85a-4991-b9eb-dc7e37a24e73\" (UID: \"685e184f-b85a-4991-b9eb-dc7e37a24e73\") " Jan 17 01:32:02.332187 kubelet[2681]: I0117 01:32:02.331610 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/685e184f-b85a-4991-b9eb-dc7e37a24e73-whisker-backend-key-pair\") pod \"685e184f-b85a-4991-b9eb-dc7e37a24e73\" (UID: \"685e184f-b85a-4991-b9eb-dc7e37a24e73\") " Jan 17 01:32:02.339848 kubelet[2681]: I0117 01:32:02.338624 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/685e184f-b85a-4991-b9eb-dc7e37a24e73-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "685e184f-b85a-4991-b9eb-dc7e37a24e73" (UID: "685e184f-b85a-4991-b9eb-dc7e37a24e73"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 01:32:02.356184 kubelet[2681]: I0117 01:32:02.355747 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/685e184f-b85a-4991-b9eb-dc7e37a24e73-kube-api-access-hb4hf" (OuterVolumeSpecName: "kube-api-access-hb4hf") pod "685e184f-b85a-4991-b9eb-dc7e37a24e73" (UID: "685e184f-b85a-4991-b9eb-dc7e37a24e73"). InnerVolumeSpecName "kube-api-access-hb4hf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 01:32:02.358758 kubelet[2681]: I0117 01:32:02.357436 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/685e184f-b85a-4991-b9eb-dc7e37a24e73-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "685e184f-b85a-4991-b9eb-dc7e37a24e73" (UID: "685e184f-b85a-4991-b9eb-dc7e37a24e73"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 01:32:02.358366 systemd[1]: var-lib-kubelet-pods-685e184f\x2db85a\x2d4991\x2db9eb\x2ddc7e37a24e73-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhb4hf.mount: Deactivated successfully. Jan 17 01:32:02.358519 systemd[1]: var-lib-kubelet-pods-685e184f\x2db85a\x2d4991\x2db9eb\x2ddc7e37a24e73-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 01:32:02.436235 kubelet[2681]: I0117 01:32:02.436182 2681 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hb4hf\" (UniqueName: \"kubernetes.io/projected/685e184f-b85a-4991-b9eb-dc7e37a24e73-kube-api-access-hb4hf\") on node \"srv-dv3jc.gb1.brightbox.com\" DevicePath \"\"" Jan 17 01:32:02.436430 kubelet[2681]: I0117 01:32:02.436245 2681 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/685e184f-b85a-4991-b9eb-dc7e37a24e73-whisker-ca-bundle\") on node \"srv-dv3jc.gb1.brightbox.com\" DevicePath \"\"" Jan 17 01:32:02.436430 kubelet[2681]: I0117 01:32:02.436267 2681 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/685e184f-b85a-4991-b9eb-dc7e37a24e73-whisker-backend-key-pair\") on node \"srv-dv3jc.gb1.brightbox.com\" DevicePath \"\"" Jan 17 01:32:02.675596 containerd[1507]: time="2026-01-17T01:32:02.674752853Z" level=info msg="StopPodSandbox for \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\"" Jan 17 01:32:02.726076 systemd[1]: Removed slice kubepods-besteffort-pod685e184f_b85a_4991_b9eb_dc7e37a24e73.slice - libcontainer container kubepods-besteffort-pod685e184f_b85a_4991_b9eb_dc7e37a24e73.slice. Jan 17 01:32:02.838145 containerd[1507]: 2026-01-17 01:32:02.773 [INFO][3910] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Jan 17 01:32:02.838145 containerd[1507]: 2026-01-17 01:32:02.773 [INFO][3910] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" iface="eth0" netns="/var/run/netns/cni-f93cf9e1-c125-c2ef-bf86-3a68820b9814" Jan 17 01:32:02.838145 containerd[1507]: 2026-01-17 01:32:02.775 [INFO][3910] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" iface="eth0" netns="/var/run/netns/cni-f93cf9e1-c125-c2ef-bf86-3a68820b9814" Jan 17 01:32:02.838145 containerd[1507]: 2026-01-17 01:32:02.780 [INFO][3910] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" iface="eth0" netns="/var/run/netns/cni-f93cf9e1-c125-c2ef-bf86-3a68820b9814" Jan 17 01:32:02.838145 containerd[1507]: 2026-01-17 01:32:02.782 [INFO][3910] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Jan 17 01:32:02.838145 containerd[1507]: 2026-01-17 01:32:02.782 [INFO][3910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Jan 17 01:32:02.838145 containerd[1507]: 2026-01-17 01:32:02.814 [INFO][3917] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" HandleID="k8s-pod-network.0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Workload="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:02.838145 containerd[1507]: 2026-01-17 01:32:02.815 [INFO][3917] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:02.838145 containerd[1507]: 2026-01-17 01:32:02.815 [INFO][3917] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:02.838145 containerd[1507]: 2026-01-17 01:32:02.828 [WARNING][3917] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" HandleID="k8s-pod-network.0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Workload="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:02.838145 containerd[1507]: 2026-01-17 01:32:02.828 [INFO][3917] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" HandleID="k8s-pod-network.0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Workload="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:02.838145 containerd[1507]: 2026-01-17 01:32:02.831 [INFO][3917] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:02.838145 containerd[1507]: 2026-01-17 01:32:02.836 [INFO][3910] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Jan 17 01:32:02.843809 containerd[1507]: time="2026-01-17T01:32:02.840247990Z" level=info msg="TearDown network for sandbox \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\" successfully" Jan 17 01:32:02.843809 containerd[1507]: time="2026-01-17T01:32:02.840304294Z" level=info msg="StopPodSandbox for \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\" returns successfully" Jan 17 01:32:02.843809 containerd[1507]: time="2026-01-17T01:32:02.842076549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vdtbx,Uid:a924e366-2268-4f4b-91a1-779a1cb6d303,Namespace:calico-system,Attempt:1,}" Jan 17 01:32:02.841821 systemd[1]: run-netns-cni\x2df93cf9e1\x2dc125\x2dc2ef\x2dbf86\x2d3a68820b9814.mount: Deactivated successfully. 
Jan 17 01:32:03.134256 systemd-networkd[1418]: cali76895980ec9: Link UP Jan 17 01:32:03.134786 systemd-networkd[1418]: cali76895980ec9: Gained carrier Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:02.932 [INFO][3927] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:02.955 [INFO][3927] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0 goldmane-666569f655- calico-system a924e366-2268-4f4b-91a1-779a1cb6d303 887 0 2026-01-17 01:31:34 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-dv3jc.gb1.brightbox.com goldmane-666569f655-vdtbx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali76895980ec9 [] [] }} ContainerID="58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" Namespace="calico-system" Pod="goldmane-666569f655-vdtbx" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-" Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:02.955 [INFO][3927] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" Namespace="calico-system" Pod="goldmane-666569f655-vdtbx" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.006 [INFO][3935] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" HandleID="k8s-pod-network.58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" Workload="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.007 [INFO][3935] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" HandleID="k8s-pod-network.58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" Workload="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-dv3jc.gb1.brightbox.com", "pod":"goldmane-666569f655-vdtbx", "timestamp":"2026-01-17 01:32:03.006568205 +0000 UTC"}, Hostname:"srv-dv3jc.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.007 [INFO][3935] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.007 [INFO][3935] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.007 [INFO][3935] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dv3jc.gb1.brightbox.com' Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.023 [INFO][3935] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.039 [INFO][3935] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.055 [INFO][3935] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.059 [INFO][3935] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.063 [INFO][3935] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.064 [INFO][3935] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.068 [INFO][3935] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.099 [INFO][3935] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.108 [INFO][3935] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.129/26] block=192.168.60.128/26 handle="k8s-pod-network.58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.109 [INFO][3935] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.129/26] handle="k8s-pod-network.58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.109 [INFO][3935] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 01:32:03.169407 containerd[1507]: 2026-01-17 01:32:03.109 [INFO][3935] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.129/26] IPv6=[] ContainerID="58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" HandleID="k8s-pod-network.58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" Workload="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:03.174895 containerd[1507]: 2026-01-17 01:32:03.112 [INFO][3927] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" Namespace="calico-system" Pod="goldmane-666569f655-vdtbx" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a924e366-2268-4f4b-91a1-779a1cb6d303", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-vdtbx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76895980ec9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:03.174895 containerd[1507]: 2026-01-17 01:32:03.112 [INFO][3927] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.129/32] ContainerID="58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" Namespace="calico-system" Pod="goldmane-666569f655-vdtbx" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:03.174895 containerd[1507]: 2026-01-17 01:32:03.112 [INFO][3927] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76895980ec9 ContainerID="58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" Namespace="calico-system" Pod="goldmane-666569f655-vdtbx" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:03.174895 containerd[1507]: 2026-01-17 01:32:03.136 [INFO][3927] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" Namespace="calico-system" Pod="goldmane-666569f655-vdtbx" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:03.174895 containerd[1507]: 2026-01-17 01:32:03.136 [INFO][3927] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" 
Namespace="calico-system" Pod="goldmane-666569f655-vdtbx" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a924e366-2268-4f4b-91a1-779a1cb6d303", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e", Pod:"goldmane-666569f655-vdtbx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76895980ec9", MAC:"ba:78:0a:77:86:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:03.174895 containerd[1507]: 2026-01-17 01:32:03.161 [INFO][3927] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e" Namespace="calico-system" Pod="goldmane-666569f655-vdtbx" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:03.253032 containerd[1507]: time="2026-01-17T01:32:03.252571874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:32:03.253032 containerd[1507]: time="2026-01-17T01:32:03.252705743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:32:03.253032 containerd[1507]: time="2026-01-17T01:32:03.252737119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:03.253032 containerd[1507]: time="2026-01-17T01:32:03.252924291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:03.313374 systemd[1]: Started cri-containerd-58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e.scope - libcontainer container 58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e. Jan 17 01:32:03.335777 systemd[1]: Created slice kubepods-besteffort-pod93088e93_2e87_4fb7_ba1f_ee13328ea623.slice - libcontainer container kubepods-besteffort-pod93088e93_2e87_4fb7_ba1f_ee13328ea623.slice. 
Jan 17 01:32:03.429595 kubelet[2681]: I0117 01:32:03.429547 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 01:32:03.452220 kubelet[2681]: I0117 01:32:03.451567 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm6gm\" (UniqueName: \"kubernetes.io/projected/93088e93-2e87-4fb7-ba1f-ee13328ea623-kube-api-access-zm6gm\") pod \"whisker-9598fc574-mrl8b\" (UID: \"93088e93-2e87-4fb7-ba1f-ee13328ea623\") " pod="calico-system/whisker-9598fc574-mrl8b" Jan 17 01:32:03.452220 kubelet[2681]: I0117 01:32:03.451657 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93088e93-2e87-4fb7-ba1f-ee13328ea623-whisker-ca-bundle\") pod \"whisker-9598fc574-mrl8b\" (UID: \"93088e93-2e87-4fb7-ba1f-ee13328ea623\") " pod="calico-system/whisker-9598fc574-mrl8b" Jan 17 01:32:03.452220 kubelet[2681]: I0117 01:32:03.451767 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/93088e93-2e87-4fb7-ba1f-ee13328ea623-whisker-backend-key-pair\") pod \"whisker-9598fc574-mrl8b\" (UID: \"93088e93-2e87-4fb7-ba1f-ee13328ea623\") " pod="calico-system/whisker-9598fc574-mrl8b" Jan 17 01:32:03.457838 containerd[1507]: time="2026-01-17T01:32:03.457793350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vdtbx,Uid:a924e366-2268-4f4b-91a1-779a1cb6d303,Namespace:calico-system,Attempt:1,} returns sandbox id \"58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e\"" Jan 17 01:32:03.464429 containerd[1507]: time="2026-01-17T01:32:03.463970916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 01:32:03.643973 containerd[1507]: time="2026-01-17T01:32:03.643894885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9598fc574-mrl8b,Uid:93088e93-2e87-4fb7-ba1f-ee13328ea623,Namespace:calico-system,Attempt:0,}" Jan 17 01:32:03.676868 containerd[1507]: time="2026-01-17T01:32:03.676526858Z" level=info msg="StopPodSandbox for \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\"" Jan 17 01:32:03.787723 containerd[1507]: time="2026-01-17T01:32:03.787531946Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:03.826590 containerd[1507]: time="2026-01-17T01:32:03.794632573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 01:32:03.828371 containerd[1507]: time="2026-01-17T01:32:03.802975472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 01:32:03.830006 kubelet[2681]: E0117 01:32:03.829582 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 01:32:03.830006 kubelet[2681]: E0117 01:32:03.829699 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 01:32:03.837360 kubelet[2681]: E0117 01:32:03.836886 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6kbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vdtbx_calico-system(a924e366-2268-4f4b-91a1-779a1cb6d303): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:03.839883 kubelet[2681]: E0117 01:32:03.838390 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vdtbx" podUID="a924e366-2268-4f4b-91a1-779a1cb6d303" Jan 17 01:32:03.850407 systemd[1]: run-containerd-runc-k8s.io-58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e-runc.NUomAX.mount: Deactivated successfully. Jan 17 01:32:04.007917 containerd[1507]: 2026-01-17 01:32:03.859 [INFO][4103] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Jan 17 01:32:04.007917 containerd[1507]: 2026-01-17 01:32:03.860 [INFO][4103] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" iface="eth0" netns="/var/run/netns/cni-9913c806-4d57-c43c-b409-d97896331f5f" Jan 17 01:32:04.007917 containerd[1507]: 2026-01-17 01:32:03.861 [INFO][4103] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" iface="eth0" netns="/var/run/netns/cni-9913c806-4d57-c43c-b409-d97896331f5f" Jan 17 01:32:04.007917 containerd[1507]: 2026-01-17 01:32:03.862 [INFO][4103] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" iface="eth0" netns="/var/run/netns/cni-9913c806-4d57-c43c-b409-d97896331f5f" Jan 17 01:32:04.007917 containerd[1507]: 2026-01-17 01:32:03.862 [INFO][4103] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Jan 17 01:32:04.007917 containerd[1507]: 2026-01-17 01:32:03.862 [INFO][4103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Jan 17 01:32:04.007917 containerd[1507]: 2026-01-17 01:32:03.981 [INFO][4122] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" HandleID="k8s-pod-network.a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Workload="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:04.007917 containerd[1507]: 2026-01-17 01:32:03.983 [INFO][4122] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:04.007917 containerd[1507]: 2026-01-17 01:32:03.983 [INFO][4122] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:04.007917 containerd[1507]: 2026-01-17 01:32:03.998 [WARNING][4122] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" HandleID="k8s-pod-network.a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Workload="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:04.007917 containerd[1507]: 2026-01-17 01:32:03.998 [INFO][4122] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" HandleID="k8s-pod-network.a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Workload="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:04.007917 containerd[1507]: 2026-01-17 01:32:04.000 [INFO][4122] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:04.007917 containerd[1507]: 2026-01-17 01:32:04.004 [INFO][4103] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Jan 17 01:32:04.012161 containerd[1507]: time="2026-01-17T01:32:04.010220757Z" level=info msg="TearDown network for sandbox \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\" successfully" Jan 17 01:32:04.012161 containerd[1507]: time="2026-01-17T01:32:04.010265860Z" level=info msg="StopPodSandbox for \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\" returns successfully" Jan 17 01:32:04.014750 containerd[1507]: time="2026-01-17T01:32:04.013790947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-crszt,Uid:c16de311-2d09-4fff-8444-304a8ff3b2b5,Namespace:calico-system,Attempt:1,}" Jan 17 01:32:04.013377 systemd[1]: run-netns-cni\x2d9913c806\x2d4d57\x2dc43c\x2db409\x2dd97896331f5f.mount: Deactivated successfully. Jan 17 01:32:04.134602 systemd-networkd[1418]: cali64b7da44921: Link UP Jan 17 01:32:04.139003 systemd-networkd[1418]: cali64b7da44921: Gained carrier Jan 17 01:32:04.178983 kubelet[2681]: E0117 01:32:04.178452 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vdtbx" podUID="a924e366-2268-4f4b-91a1-779a1cb6d303" Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:03.780 [INFO][4082] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:03.834 [INFO][4082] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dv3jc.gb1.brightbox.com-k8s-whisker--9598fc574--mrl8b-eth0 whisker-9598fc574- calico-system 93088e93-2e87-4fb7-ba1f-ee13328ea623 905 0 2026-01-17 01:32:03 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9598fc574 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-dv3jc.gb1.brightbox.com whisker-9598fc574-mrl8b eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali64b7da44921 [] [] }} ContainerID="bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" Namespace="calico-system" Pod="whisker-9598fc574-mrl8b" 
WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-whisker--9598fc574--mrl8b-" Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:03.835 [INFO][4082] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" Namespace="calico-system" Pod="whisker-9598fc574-mrl8b" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-whisker--9598fc574--mrl8b-eth0" Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:03.988 [INFO][4127] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" HandleID="k8s-pod-network.bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" Workload="srv--dv3jc.gb1.brightbox.com-k8s-whisker--9598fc574--mrl8b-eth0" Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:03.989 [INFO][4127] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" HandleID="k8s-pod-network.bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" Workload="srv--dv3jc.gb1.brightbox.com-k8s-whisker--9598fc574--mrl8b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f92f0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-dv3jc.gb1.brightbox.com", "pod":"whisker-9598fc574-mrl8b", "timestamp":"2026-01-17 01:32:03.988226128 +0000 UTC"}, Hostname:"srv-dv3jc.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:03.989 [INFO][4127] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:04.000 [INFO][4127] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:04.000 [INFO][4127] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dv3jc.gb1.brightbox.com' Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:04.025 [INFO][4127] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:04.046 [INFO][4127] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:04.075 [INFO][4127] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:04.080 [INFO][4127] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:04.085 [INFO][4127] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:04.086 [INFO][4127] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:04.091 [INFO][4127] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48 Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:04.099 [INFO][4127] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:04.122 [INFO][4127] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.130/26] block=192.168.60.128/26 handle="k8s-pod-network.bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:04.122 [INFO][4127] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.130/26] handle="k8s-pod-network.bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:04.123 [INFO][4127] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 01:32:04.179933 containerd[1507]: 2026-01-17 01:32:04.123 [INFO][4127] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.130/26] IPv6=[] ContainerID="bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" HandleID="k8s-pod-network.bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" Workload="srv--dv3jc.gb1.brightbox.com-k8s-whisker--9598fc574--mrl8b-eth0" Jan 17 01:32:04.183604 containerd[1507]: 2026-01-17 01:32:04.128 [INFO][4082] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" Namespace="calico-system" Pod="whisker-9598fc574-mrl8b" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-whisker--9598fc574--mrl8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-whisker--9598fc574--mrl8b-eth0", GenerateName:"whisker-9598fc574-", Namespace:"calico-system", SelfLink:"", UID:"93088e93-2e87-4fb7-ba1f-ee13328ea623", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 32, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9598fc574", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"", Pod:"whisker-9598fc574-mrl8b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.60.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali64b7da44921", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:04.183604 containerd[1507]: 2026-01-17 01:32:04.129 [INFO][4082] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.130/32] ContainerID="bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" Namespace="calico-system" Pod="whisker-9598fc574-mrl8b" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-whisker--9598fc574--mrl8b-eth0" Jan 17 01:32:04.183604 containerd[1507]: 2026-01-17 01:32:04.129 [INFO][4082] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64b7da44921 ContainerID="bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" Namespace="calico-system" Pod="whisker-9598fc574-mrl8b" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-whisker--9598fc574--mrl8b-eth0" Jan 17 01:32:04.183604 containerd[1507]: 2026-01-17 01:32:04.137 [INFO][4082] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" Namespace="calico-system" Pod="whisker-9598fc574-mrl8b" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-whisker--9598fc574--mrl8b-eth0" Jan 17 01:32:04.183604 containerd[1507]: 2026-01-17 01:32:04.140 [INFO][4082] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" Namespace="calico-system" 
Pod="whisker-9598fc574-mrl8b" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-whisker--9598fc574--mrl8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-whisker--9598fc574--mrl8b-eth0", GenerateName:"whisker-9598fc574-", Namespace:"calico-system", SelfLink:"", UID:"93088e93-2e87-4fb7-ba1f-ee13328ea623", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 32, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9598fc574", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48", Pod:"whisker-9598fc574-mrl8b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.60.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali64b7da44921", MAC:"b2:b7:87:73:74:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:04.183604 containerd[1507]: 2026-01-17 01:32:04.162 [INFO][4082] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48" Namespace="calico-system" Pod="whisker-9598fc574-mrl8b" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-whisker--9598fc574--mrl8b-eth0" Jan 17 01:32:04.264172 containerd[1507]: time="2026-01-17T01:32:04.263660541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:32:04.264172 containerd[1507]: time="2026-01-17T01:32:04.264068289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:32:04.273142 containerd[1507]: time="2026-01-17T01:32:04.264105780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:04.273542 containerd[1507]: time="2026-01-17T01:32:04.273310439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:04.363309 systemd[1]: Started cri-containerd-bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48.scope - libcontainer container bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48. 
Jan 17 01:32:04.468027 systemd-networkd[1418]: caliad4502ce9ba: Link UP Jan 17 01:32:04.471208 systemd-networkd[1418]: caliad4502ce9ba: Gained carrier Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.172 [INFO][4138] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.227 [INFO][4138] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0 csi-node-driver- calico-system c16de311-2d09-4fff-8444-304a8ff3b2b5 911 0 2026-01-17 01:31:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-dv3jc.gb1.brightbox.com csi-node-driver-crszt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliad4502ce9ba [] [] }} ContainerID="6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" Namespace="calico-system" Pod="csi-node-driver-crszt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-" Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.228 [INFO][4138] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" Namespace="calico-system" Pod="csi-node-driver-crszt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.361 [INFO][4174] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" HandleID="k8s-pod-network.6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" Workload="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.367 [INFO][4174] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" HandleID="k8s-pod-network.6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" Workload="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000124c50), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-dv3jc.gb1.brightbox.com", "pod":"csi-node-driver-crszt", "timestamp":"2026-01-17 01:32:04.361654333 +0000 UTC"}, Hostname:"srv-dv3jc.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.368 [INFO][4174] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.368 [INFO][4174] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.368 [INFO][4174] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dv3jc.gb1.brightbox.com' Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.385 [INFO][4174] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.402 [INFO][4174] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.414 [INFO][4174] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.419 [INFO][4174] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.424 [INFO][4174] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.424 [INFO][4174] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.428 [INFO][4174] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513 Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.437 [INFO][4174] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.447 [INFO][4174] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.131/26] block=192.168.60.128/26 handle="k8s-pod-network.6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.447 [INFO][4174] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.131/26] handle="k8s-pod-network.6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.447 [INFO][4174] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 01:32:04.514360 containerd[1507]: 2026-01-17 01:32:04.447 [INFO][4174] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.131/26] IPv6=[] ContainerID="6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" HandleID="k8s-pod-network.6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" Workload="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:04.517892 containerd[1507]: 2026-01-17 01:32:04.453 [INFO][4138] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" Namespace="calico-system" Pod="csi-node-driver-crszt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c16de311-2d09-4fff-8444-304a8ff3b2b5", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-crszt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliad4502ce9ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:04.517892 containerd[1507]: 2026-01-17 01:32:04.453 [INFO][4138] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.131/32] ContainerID="6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" Namespace="calico-system" Pod="csi-node-driver-crszt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:04.517892 containerd[1507]: 2026-01-17 01:32:04.454 [INFO][4138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad4502ce9ba ContainerID="6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" Namespace="calico-system" Pod="csi-node-driver-crszt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:04.517892 containerd[1507]: 2026-01-17 01:32:04.474 [INFO][4138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" Namespace="calico-system" Pod="csi-node-driver-crszt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:04.517892 containerd[1507]: 2026-01-17 01:32:04.475 [INFO][4138] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" Namespace="calico-system" Pod="csi-node-driver-crszt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c16de311-2d09-4fff-8444-304a8ff3b2b5", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513", Pod:"csi-node-driver-crszt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliad4502ce9ba", MAC:"96:aa:a2:df:30:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:04.517892 containerd[1507]: 2026-01-17 01:32:04.495 [INFO][4138] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513" Namespace="calico-system" Pod="csi-node-driver-crszt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:04.564592 containerd[1507]: time="2026-01-17T01:32:04.563745760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9598fc574-mrl8b,Uid:93088e93-2e87-4fb7-ba1f-ee13328ea623,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd5615122d18fb850e5eaf799374fb9a5f6edbef2b9eedfdc36e6803c9ad9d48\"" Jan 17 01:32:04.568138 containerd[1507]: time="2026-01-17T01:32:04.567881273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 01:32:04.577969 systemd-networkd[1418]: cali76895980ec9: Gained IPv6LL Jan 17 01:32:04.588239 containerd[1507]: time="2026-01-17T01:32:04.587070430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:32:04.588239 containerd[1507]: time="2026-01-17T01:32:04.587612345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:32:04.588239 containerd[1507]: time="2026-01-17T01:32:04.587644218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:04.588239 containerd[1507]: time="2026-01-17T01:32:04.587778410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:04.631569 systemd[1]: Started cri-containerd-6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513.scope - libcontainer container 6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513. Jan 17 01:32:04.691145 containerd[1507]: time="2026-01-17T01:32:04.689554148Z" level=info msg="StopPodSandbox for \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\"" Jan 17 01:32:04.699680 kubelet[2681]: I0117 01:32:04.699599 2681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="685e184f-b85a-4991-b9eb-dc7e37a24e73" path="/var/lib/kubelet/pods/685e184f-b85a-4991-b9eb-dc7e37a24e73/volumes" Jan 17 01:32:04.773671 containerd[1507]: time="2026-01-17T01:32:04.771432503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-crszt,Uid:c16de311-2d09-4fff-8444-304a8ff3b2b5,Namespace:calico-system,Attempt:1,} returns sandbox id \"6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513\"" Jan 17 01:32:04.952145 containerd[1507]: 2026-01-17 01:32:04.840 [INFO][4262] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Jan 17 01:32:04.952145 containerd[1507]: 2026-01-17 01:32:04.840 [INFO][4262] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" iface="eth0" netns="/var/run/netns/cni-579d70c8-1f04-c8b6-8816-fe52d31e728a" Jan 17 01:32:04.952145 containerd[1507]: 2026-01-17 01:32:04.840 [INFO][4262] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" iface="eth0" netns="/var/run/netns/cni-579d70c8-1f04-c8b6-8816-fe52d31e728a" Jan 17 01:32:04.952145 containerd[1507]: 2026-01-17 01:32:04.842 [INFO][4262] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" iface="eth0" netns="/var/run/netns/cni-579d70c8-1f04-c8b6-8816-fe52d31e728a" Jan 17 01:32:04.952145 containerd[1507]: 2026-01-17 01:32:04.842 [INFO][4262] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Jan 17 01:32:04.952145 containerd[1507]: 2026-01-17 01:32:04.842 [INFO][4262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Jan 17 01:32:04.952145 containerd[1507]: 2026-01-17 01:32:04.917 [INFO][4276] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" HandleID="k8s-pod-network.ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:04.952145 containerd[1507]: 2026-01-17 01:32:04.920 [INFO][4276] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:04.952145 containerd[1507]: 2026-01-17 01:32:04.920 [INFO][4276] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:04.952145 containerd[1507]: 2026-01-17 01:32:04.932 [WARNING][4276] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" HandleID="k8s-pod-network.ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:04.952145 containerd[1507]: 2026-01-17 01:32:04.933 [INFO][4276] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" HandleID="k8s-pod-network.ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:04.952145 containerd[1507]: 2026-01-17 01:32:04.938 [INFO][4276] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:04.952145 containerd[1507]: 2026-01-17 01:32:04.946 [INFO][4262] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Jan 17 01:32:04.954933 containerd[1507]: time="2026-01-17T01:32:04.952497389Z" level=info msg="TearDown network for sandbox \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\" successfully" Jan 17 01:32:04.954933 containerd[1507]: time="2026-01-17T01:32:04.952531457Z" level=info msg="StopPodSandbox for \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\" returns successfully" Jan 17 01:32:04.954933 containerd[1507]: time="2026-01-17T01:32:04.953416033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j2ntp,Uid:4a82681c-7367-4e06-9a33-de4eeb86a08d,Namespace:kube-system,Attempt:1,}" Jan 17 01:32:04.957510 systemd[1]: run-netns-cni\x2d579d70c8\x2d1f04\x2dc8b6\x2d8816\x2dfe52d31e728a.mount: Deactivated successfully. Jan 17 01:32:04.971851 containerd[1507]: time="2026-01-17T01:32:04.971454237Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:04.973730 containerd[1507]: time="2026-01-17T01:32:04.973263220Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 01:32:04.973730 containerd[1507]: time="2026-01-17T01:32:04.973604954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 01:32:04.974963 kubelet[2681]: E0117 01:32:04.974226 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 01:32:04.974963 kubelet[2681]: E0117 01:32:04.974298 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 01:32:04.974963 kubelet[2681]: E0117 01:32:04.974560 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:184225ead8aa4834ae5f2781753a20a4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zm6gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9598fc574-mrl8b_calico-system(93088e93-2e87-4fb7-ba1f-ee13328ea623): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:04.975592 containerd[1507]: time="2026-01-17T01:32:04.975452919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 01:32:05.189655 kubelet[2681]: E0117 01:32:05.189530 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vdtbx" podUID="a924e366-2268-4f4b-91a1-779a1cb6d303" Jan 17 01:32:05.321321 systemd-networkd[1418]: cali010329f8a72: Link UP Jan 17 01:32:05.323468 systemd-networkd[1418]: cali010329f8a72: Gained carrier Jan 17 01:32:05.344582 containerd[1507]: time="2026-01-17T01:32:05.344382002Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.064 [INFO][4283] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.093 [INFO][4283] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0 coredns-668d6bf9bc- kube-system 4a82681c-7367-4e06-9a33-de4eeb86a08d 929 0 2026-01-17 01:31:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-dv3jc.gb1.brightbox.com coredns-668d6bf9bc-j2ntp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali010329f8a72 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-j2ntp" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-" Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.093 [INFO][4283] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-j2ntp" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.241 [INFO][4315] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" HandleID="k8s-pod-network.229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.244 [INFO][4315] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" HandleID="k8s-pod-network.229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f920), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-dv3jc.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-j2ntp", "timestamp":"2026-01-17 01:32:05.24180843 +0000 UTC"}, Hostname:"srv-dv3jc.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.244 [INFO][4315] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.244 [INFO][4315] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.244 [INFO][4315] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dv3jc.gb1.brightbox.com' Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.262 [INFO][4315] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.271 [INFO][4315] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.278 [INFO][4315] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.281 [INFO][4315] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.285 [INFO][4315] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.285 [INFO][4315] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.290 [INFO][4315] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2 Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.301 [INFO][4315] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.310 [INFO][4315] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.132/26] block=192.168.60.128/26 handle="k8s-pod-network.229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.310 [INFO][4315] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.132/26] handle="k8s-pod-network.229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.310 [INFO][4315] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 01:32:05.348538 containerd[1507]: 2026-01-17 01:32:05.311 [INFO][4315] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.132/26] IPv6=[] ContainerID="229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" HandleID="k8s-pod-network.229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:05.353668 containerd[1507]: 2026-01-17 01:32:05.315 [INFO][4283] cni-plugin/k8s.go 418: Populated endpoint ContainerID="229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-j2ntp" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4a82681c-7367-4e06-9a33-de4eeb86a08d", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-j2ntp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali010329f8a72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:05.353668 containerd[1507]: 2026-01-17 01:32:05.315 [INFO][4283] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.132/32] ContainerID="229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-j2ntp" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:05.353668 containerd[1507]: 2026-01-17 01:32:05.315 [INFO][4283] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali010329f8a72 ContainerID="229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-j2ntp" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:05.353668 containerd[1507]: 2026-01-17 01:32:05.327 [INFO][4283] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-j2ntp" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:05.353668 containerd[1507]: 2026-01-17 01:32:05.327 [INFO][4283] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-j2ntp" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4a82681c-7367-4e06-9a33-de4eeb86a08d", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2", Pod:"coredns-668d6bf9bc-j2ntp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali010329f8a72", MAC:"3e:2a:64:45:9e:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:05.353668 containerd[1507]: 2026-01-17 01:32:05.342 [INFO][4283] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-j2ntp" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:05.353668 containerd[1507]: time="2026-01-17T01:32:05.350877885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 01:32:05.353668 containerd[1507]: time="2026-01-17T01:32:05.350977675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 01:32:05.355029 kubelet[2681]: E0117 01:32:05.351873 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 01:32:05.355029 kubelet[2681]: E0117 01:32:05.351930 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 01:32:05.355029 kubelet[2681]: E0117 01:32:05.354017 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfbnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-crszt_calico-system(c16de311-2d09-4fff-8444-304a8ff3b2b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:05.357152 containerd[1507]: time="2026-01-17T01:32:05.354436782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 01:32:05.407534 containerd[1507]: time="2026-01-17T01:32:05.402685620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:32:05.407534 containerd[1507]: time="2026-01-17T01:32:05.402780380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:32:05.407534 containerd[1507]: time="2026-01-17T01:32:05.402803070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:05.407534 containerd[1507]: time="2026-01-17T01:32:05.402938389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:05.446422 systemd[1]: Started cri-containerd-229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2.scope - libcontainer container 229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2. Jan 17 01:32:05.536539 systemd-networkd[1418]: caliad4502ce9ba: Gained IPv6LL Jan 17 01:32:05.537402 kernel: bpftool[4395]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 01:32:05.593598 containerd[1507]: time="2026-01-17T01:32:05.593471931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j2ntp,Uid:4a82681c-7367-4e06-9a33-de4eeb86a08d,Namespace:kube-system,Attempt:1,} returns sandbox id \"229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2\"" Jan 17 01:32:05.600855 containerd[1507]: time="2026-01-17T01:32:05.600797250Z" level=info msg="CreateContainer within sandbox \"229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 01:32:05.644039 containerd[1507]: time="2026-01-17T01:32:05.643854191Z" level=info msg="CreateContainer within sandbox \"229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73a20077e699b921305b1b0c6fb047e05e03e097deefbb0493370d5283b88bd0\"" Jan 17 01:32:05.645555 containerd[1507]: time="2026-01-17T01:32:05.645514005Z" level=info msg="StartContainer for \"73a20077e699b921305b1b0c6fb047e05e03e097deefbb0493370d5283b88bd0\"" Jan 17 01:32:05.662486 systemd-networkd[1418]: cali64b7da44921: Gained IPv6LL Jan 17 01:32:05.675904 containerd[1507]: time="2026-01-17T01:32:05.675420126Z" level=info msg="StopPodSandbox for \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\"" Jan 17 01:32:05.676289 containerd[1507]: time="2026-01-17T01:32:05.676256746Z" level=info msg="StopPodSandbox for \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\"" Jan 17 01:32:05.678405 containerd[1507]: time="2026-01-17T01:32:05.677972705Z" level=info msg="StopPodSandbox for \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\"" Jan 17 01:32:05.708528 containerd[1507]: time="2026-01-17T01:32:05.708360537Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:05.712165 containerd[1507]: time="2026-01-17T01:32:05.711173874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 01:32:05.712336 containerd[1507]: time="2026-01-17T01:32:05.712238385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 01:32:05.712760 kubelet[2681]: E0117 01:32:05.712699 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 01:32:05.714233 kubelet[2681]: E0117 01:32:05.712775 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 01:32:05.714233 kubelet[2681]: E0117 01:32:05.713226 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm6gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9598fc574-mrl8b_calico-system(93088e93-2e87-4fb7-ba1f-ee13328ea623): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:05.716154 kubelet[2681]: E0117 01:32:05.714759 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9598fc574-mrl8b" podUID="93088e93-2e87-4fb7-ba1f-ee13328ea623" Jan 17 01:32:05.716252 containerd[1507]: time="2026-01-17T01:32:05.714775035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 01:32:05.761657 systemd[1]: Started cri-containerd-73a20077e699b921305b1b0c6fb047e05e03e097deefbb0493370d5283b88bd0.scope - libcontainer container 73a20077e699b921305b1b0c6fb047e05e03e097deefbb0493370d5283b88bd0. Jan 17 01:32:05.848740 systemd[1]: run-containerd-runc-k8s.io-a039439259c9b76ab22fbccf8671bf924eb5c34a31000b8a951a5473fd269cb0-runc.HpXonN.mount: Deactivated successfully. Jan 17 01:32:05.940180 containerd[1507]: time="2026-01-17T01:32:05.939940782Z" level=info msg="StartContainer for \"73a20077e699b921305b1b0c6fb047e05e03e097deefbb0493370d5283b88bd0\" returns successfully" Jan 17 01:32:06.075936 containerd[1507]: time="2026-01-17T01:32:06.075602470Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:06.079828 containerd[1507]: time="2026-01-17T01:32:06.079773551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 01:32:06.080426 containerd[1507]: time="2026-01-17T01:32:06.079960496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 01:32:06.081298 kubelet[2681]: E0117 01:32:06.080854 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 01:32:06.081298 kubelet[2681]: E0117 01:32:06.080955 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 01:32:06.082958 kubelet[2681]: E0117 01:32:06.082849 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfbnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-crszt_calico-system(c16de311-2d09-4fff-8444-304a8ff3b2b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:06.084924 kubelet[2681]: E0117 01:32:06.084839 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5" Jan 17 01:32:06.090344 containerd[1507]: 2026-01-17 01:32:05.970 [INFO][4457] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Jan 17 01:32:06.090344 containerd[1507]: 2026-01-17 01:32:05.971 [INFO][4457] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" iface="eth0" netns="/var/run/netns/cni-28426517-b134-9a69-bc89-e3c7b54c1732" Jan 17 01:32:06.090344 containerd[1507]: 2026-01-17 01:32:05.973 [INFO][4457] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" iface="eth0" netns="/var/run/netns/cni-28426517-b134-9a69-bc89-e3c7b54c1732" Jan 17 01:32:06.090344 containerd[1507]: 2026-01-17 01:32:05.977 [INFO][4457] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" iface="eth0" netns="/var/run/netns/cni-28426517-b134-9a69-bc89-e3c7b54c1732" Jan 17 01:32:06.090344 containerd[1507]: 2026-01-17 01:32:05.977 [INFO][4457] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Jan 17 01:32:06.090344 containerd[1507]: 2026-01-17 01:32:05.977 [INFO][4457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Jan 17 01:32:06.090344 containerd[1507]: 2026-01-17 01:32:06.047 [INFO][4487] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" HandleID="k8s-pod-network.df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:06.090344 containerd[1507]: 2026-01-17 01:32:06.048 [INFO][4487] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:06.090344 containerd[1507]: 2026-01-17 01:32:06.048 [INFO][4487] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:06.090344 containerd[1507]: 2026-01-17 01:32:06.072 [WARNING][4487] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" HandleID="k8s-pod-network.df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:06.090344 containerd[1507]: 2026-01-17 01:32:06.072 [INFO][4487] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" HandleID="k8s-pod-network.df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:06.090344 containerd[1507]: 2026-01-17 01:32:06.075 [INFO][4487] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:06.090344 containerd[1507]: 2026-01-17 01:32:06.083 [INFO][4457] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Jan 17 01:32:06.095066 containerd[1507]: time="2026-01-17T01:32:06.093227067Z" level=info msg="TearDown network for sandbox \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\" successfully" Jan 17 01:32:06.095066 containerd[1507]: time="2026-01-17T01:32:06.093473728Z" level=info msg="StopPodSandbox for \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\" returns successfully" Jan 17 01:32:06.095872 containerd[1507]: time="2026-01-17T01:32:06.095833546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594455f78-c92j5,Uid:307224a1-2fa9-44a1-ad77-684cc2300054,Namespace:calico-apiserver,Attempt:1,}" Jan 17 01:32:06.096688 systemd[1]: run-netns-cni\x2d28426517\x2db134\x2d9a69\x2dbc89\x2de3c7b54c1732.mount: Deactivated successfully. Jan 17 01:32:06.198149 containerd[1507]: 2026-01-17 01:32:05.996 [INFO][4445] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Jan 17 01:32:06.198149 containerd[1507]: 2026-01-17 01:32:05.997 [INFO][4445] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" iface="eth0" netns="/var/run/netns/cni-d7042ee9-cb99-e9e7-7202-1e6c3c8c6cbd" Jan 17 01:32:06.198149 containerd[1507]: 2026-01-17 01:32:05.998 [INFO][4445] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" iface="eth0" netns="/var/run/netns/cni-d7042ee9-cb99-e9e7-7202-1e6c3c8c6cbd" Jan 17 01:32:06.198149 containerd[1507]: 2026-01-17 01:32:06.000 [INFO][4445] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" iface="eth0" netns="/var/run/netns/cni-d7042ee9-cb99-e9e7-7202-1e6c3c8c6cbd" Jan 17 01:32:06.198149 containerd[1507]: 2026-01-17 01:32:06.001 [INFO][4445] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Jan 17 01:32:06.198149 containerd[1507]: 2026-01-17 01:32:06.001 [INFO][4445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Jan 17 01:32:06.198149 containerd[1507]: 2026-01-17 01:32:06.130 [INFO][4498] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" HandleID="k8s-pod-network.133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:06.198149 containerd[1507]: 2026-01-17 01:32:06.130 [INFO][4498] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:06.198149 containerd[1507]: 2026-01-17 01:32:06.131 [INFO][4498] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:06.198149 containerd[1507]: 2026-01-17 01:32:06.153 [WARNING][4498] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" HandleID="k8s-pod-network.133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:06.198149 containerd[1507]: 2026-01-17 01:32:06.153 [INFO][4498] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" HandleID="k8s-pod-network.133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:06.198149 containerd[1507]: 2026-01-17 01:32:06.156 [INFO][4498] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:06.198149 containerd[1507]: 2026-01-17 01:32:06.183 [INFO][4445] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Jan 17 01:32:06.204137 containerd[1507]: 2026-01-17 01:32:05.972 [INFO][4444] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Jan 17 01:32:06.204137 containerd[1507]: 2026-01-17 01:32:05.972 [INFO][4444] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" iface="eth0" netns="/var/run/netns/cni-70b59e0a-d0fc-0c08-8a8e-2ac357650b6a" Jan 17 01:32:06.204137 containerd[1507]: 2026-01-17 01:32:05.973 [INFO][4444] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" iface="eth0" netns="/var/run/netns/cni-70b59e0a-d0fc-0c08-8a8e-2ac357650b6a" Jan 17 01:32:06.204137 containerd[1507]: 2026-01-17 01:32:05.976 [INFO][4444] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" iface="eth0" netns="/var/run/netns/cni-70b59e0a-d0fc-0c08-8a8e-2ac357650b6a" Jan 17 01:32:06.204137 containerd[1507]: 2026-01-17 01:32:05.976 [INFO][4444] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Jan 17 01:32:06.204137 containerd[1507]: 2026-01-17 01:32:05.977 [INFO][4444] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Jan 17 01:32:06.204137 containerd[1507]: 2026-01-17 01:32:06.154 [INFO][4485] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" HandleID="k8s-pod-network.b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:06.204137 containerd[1507]: 2026-01-17 01:32:06.155 [INFO][4485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:06.204137 containerd[1507]: 2026-01-17 01:32:06.156 [INFO][4485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:06.204137 containerd[1507]: 2026-01-17 01:32:06.180 [WARNING][4485] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" HandleID="k8s-pod-network.b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:06.204137 containerd[1507]: 2026-01-17 01:32:06.180 [INFO][4485] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" HandleID="k8s-pod-network.b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:06.204137 containerd[1507]: 2026-01-17 01:32:06.187 [INFO][4485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:06.204137 containerd[1507]: 2026-01-17 01:32:06.190 [INFO][4444] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Jan 17 01:32:06.207755 containerd[1507]: time="2026-01-17T01:32:06.207430645Z" level=info msg="TearDown network for sandbox \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\" successfully" Jan 17 01:32:06.207755 containerd[1507]: time="2026-01-17T01:32:06.207469272Z" level=info msg="StopPodSandbox for \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\" returns successfully" Jan 17 01:32:06.206036 systemd[1]: run-netns-cni\x2dd7042ee9\x2dcb99\x2de9e7\x2d7202\x2d1e6c3c8c6cbd.mount: Deactivated successfully. Jan 17 01:32:06.208863 containerd[1507]: time="2026-01-17T01:32:06.208539621Z" level=info msg="TearDown network for sandbox \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\" successfully" Jan 17 01:32:06.208863 containerd[1507]: time="2026-01-17T01:32:06.208571793Z" level=info msg="StopPodSandbox for \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\" returns successfully" Jan 17 01:32:06.209559 containerd[1507]: time="2026-01-17T01:32:06.209351609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c7d5775b-t6c4g,Uid:50610d67-f39f-4c35-8e6d-c6596bfafc13,Namespace:calico-system,Attempt:1,}" Jan 17 01:32:06.211382 containerd[1507]: time="2026-01-17T01:32:06.211195668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594455f78-9rjtt,Uid:166a9b11-96f4-43ec-a822-357f748e3c20,Namespace:calico-apiserver,Attempt:1,}" Jan 17 01:32:06.225291 kubelet[2681]: E0117 01:32:06.225199 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9598fc574-mrl8b" podUID="93088e93-2e87-4fb7-ba1f-ee13328ea623" Jan 17 01:32:06.225910 
kubelet[2681]: E0117 01:32:06.225834 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5" Jan 17 01:32:06.266238 kubelet[2681]: I0117 01:32:06.265967 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-j2ntp" podStartSLOduration=49.265922075 podStartE2EDuration="49.265922075s" podCreationTimestamp="2026-01-17 01:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 01:32:06.259099482 +0000 UTC m=+53.847216693" watchObservedRunningTime="2026-01-17 01:32:06.265922075 +0000 UTC m=+53.854039279" Jan 17 01:32:06.493771 systemd-networkd[1418]: cali010329f8a72: Gained IPv6LL Jan 17 01:32:06.687417 systemd-networkd[1418]: cali5060708eabe: Link UP Jan 17 01:32:06.693260 systemd-networkd[1418]: cali5060708eabe: Gained carrier Jan 17 01:32:06.716154 containerd[1507]: time="2026-01-17T01:32:06.716089226Z" level=info msg="StopPodSandbox for \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\"" Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.435 [INFO][4519] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0 calico-kube-controllers-c7d5775b- calico-system 50610d67-f39f-4c35-8e6d-c6596bfafc13 955 0 2026-01-17 01:31:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c7d5775b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-dv3jc.gb1.brightbox.com calico-kube-controllers-c7d5775b-t6c4g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5060708eabe [] [] }} ContainerID="98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" Namespace="calico-system" Pod="calico-kube-controllers-c7d5775b-t6c4g" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-" Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.436 [INFO][4519] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" Namespace="calico-system" Pod="calico-kube-controllers-c7d5775b-t6c4g" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 
01:32:06.572 [INFO][4559] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" HandleID="k8s-pod-network.98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.575 [INFO][4559] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" HandleID="k8s-pod-network.98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033cc80), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-dv3jc.gb1.brightbox.com", "pod":"calico-kube-controllers-c7d5775b-t6c4g", "timestamp":"2026-01-17 01:32:06.57288422 +0000 UTC"}, Hostname:"srv-dv3jc.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.575 [INFO][4559] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.575 [INFO][4559] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.575 [INFO][4559] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dv3jc.gb1.brightbox.com' Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.594 [INFO][4559] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.608 [INFO][4559] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.617 [INFO][4559] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.620 [INFO][4559] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.626 [INFO][4559] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.626 [INFO][4559] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.630 [INFO][4559] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2 Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.644 [INFO][4559] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.655 [INFO][4559] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.60.133/26] block=192.168.60.128/26 handle="k8s-pod-network.98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.656 [INFO][4559] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.133/26] handle="k8s-pod-network.98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.656 [INFO][4559] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:06.741954 containerd[1507]: 2026-01-17 01:32:06.656 [INFO][4559] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.133/26] IPv6=[] ContainerID="98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" HandleID="k8s-pod-network.98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:06.743080 containerd[1507]: 2026-01-17 01:32:06.663 [INFO][4519] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" Namespace="calico-system" Pod="calico-kube-controllers-c7d5775b-t6c4g" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0", GenerateName:"calico-kube-controllers-c7d5775b-", Namespace:"calico-system", SelfLink:"", UID:"50610d67-f39f-4c35-8e6d-c6596bfafc13", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c7d5775b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-c7d5775b-t6c4g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5060708eabe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:06.743080 containerd[1507]: 2026-01-17 01:32:06.664 [INFO][4519] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.133/32] ContainerID="98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" Namespace="calico-system" Pod="calico-kube-controllers-c7d5775b-t6c4g" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:06.743080 containerd[1507]: 2026-01-17 01:32:06.664 [INFO][4519] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5060708eabe 
ContainerID="98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" Namespace="calico-system" Pod="calico-kube-controllers-c7d5775b-t6c4g" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:06.743080 containerd[1507]: 2026-01-17 01:32:06.701 [INFO][4519] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" Namespace="calico-system" Pod="calico-kube-controllers-c7d5775b-t6c4g" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:06.743080 containerd[1507]: 2026-01-17 01:32:06.704 [INFO][4519] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" Namespace="calico-system" Pod="calico-kube-controllers-c7d5775b-t6c4g" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0", GenerateName:"calico-kube-controllers-c7d5775b-", Namespace:"calico-system", SelfLink:"", UID:"50610d67-f39f-4c35-8e6d-c6596bfafc13", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c7d5775b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2", Pod:"calico-kube-controllers-c7d5775b-t6c4g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5060708eabe", MAC:"36:82:d5:ab:70:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:06.743080 containerd[1507]: 2026-01-17 01:32:06.735 [INFO][4519] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2" Namespace="calico-system" Pod="calico-kube-controllers-c7d5775b-t6c4g" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:06.822446 containerd[1507]: time="2026-01-17T01:32:06.820379109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:32:06.822446 containerd[1507]: time="2026-01-17T01:32:06.820496042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:32:06.822446 containerd[1507]: time="2026-01-17T01:32:06.820539108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:06.822446 containerd[1507]: time="2026-01-17T01:32:06.820744304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:06.852876 systemd[1]: run-netns-cni\x2d70b59e0a\x2dd0fc\x2d0c08\x2d8a8e\x2d2ac357650b6a.mount: Deactivated successfully. Jan 17 01:32:06.884523 systemd[1]: Started cri-containerd-98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2.scope - libcontainer container 98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2. Jan 17 01:32:06.905621 systemd-networkd[1418]: cali25c2ac32e38: Link UP Jan 17 01:32:06.909129 systemd-networkd[1418]: cali25c2ac32e38: Gained carrier Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.415 [INFO][4514] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0 calico-apiserver-6594455f78- calico-apiserver 307224a1-2fa9-44a1-ad77-684cc2300054 954 0 2026-01-17 01:31:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6594455f78 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-dv3jc.gb1.brightbox.com calico-apiserver-6594455f78-c92j5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali25c2ac32e38 [] [] }} ContainerID="ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-c92j5" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-" Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.419 [INFO][4514] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-c92j5" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.572 [INFO][4553] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" HandleID="k8s-pod-network.ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.577 [INFO][4553] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" HandleID="k8s-pod-network.ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d8840), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-dv3jc.gb1.brightbox.com", "pod":"calico-apiserver-6594455f78-c92j5", "timestamp":"2026-01-17 01:32:06.572798656 +0000 UTC"}, Hostname:"srv-dv3jc.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.577 [INFO][4553] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.656 [INFO][4553] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.656 [INFO][4553] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dv3jc.gb1.brightbox.com' Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.695 [INFO][4553] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.709 [INFO][4553] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.722 [INFO][4553] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.731 [INFO][4553] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.751 [INFO][4553] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.752 [INFO][4553] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.758 [INFO][4553] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.831 [INFO][4553] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.887 [INFO][4553] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.134/26] block=192.168.60.128/26 handle="k8s-pod-network.ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.888 [INFO][4553] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.134/26] handle="k8s-pod-network.ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.889 [INFO][4553] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 01:32:06.968634 containerd[1507]: 2026-01-17 01:32:06.889 [INFO][4553] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.134/26] IPv6=[] ContainerID="ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" HandleID="k8s-pod-network.ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:06.970462 containerd[1507]: 2026-01-17 01:32:06.896 [INFO][4514] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-c92j5" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0", GenerateName:"calico-apiserver-6594455f78-", Namespace:"calico-apiserver", SelfLink:"", UID:"307224a1-2fa9-44a1-ad77-684cc2300054", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6594455f78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6594455f78-c92j5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25c2ac32e38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:06.970462 containerd[1507]: 2026-01-17 01:32:06.896 [INFO][4514] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.134/32] ContainerID="ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-c92j5" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:06.970462 containerd[1507]: 2026-01-17 01:32:06.896 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25c2ac32e38 ContainerID="ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-c92j5" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:06.970462 containerd[1507]: 2026-01-17 01:32:06.911 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-c92j5" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:06.970462 containerd[1507]: 2026-01-17 
01:32:06.913 [INFO][4514] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-c92j5" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0", GenerateName:"calico-apiserver-6594455f78-", Namespace:"calico-apiserver", SelfLink:"", UID:"307224a1-2fa9-44a1-ad77-684cc2300054", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6594455f78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea", Pod:"calico-apiserver-6594455f78-c92j5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25c2ac32e38", MAC:"9a:97:36:73:01:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:06.970462 containerd[1507]: 2026-01-17 01:32:06.961 [INFO][4514] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-c92j5" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:07.061909 containerd[1507]: time="2026-01-17T01:32:07.060883870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:32:07.061909 containerd[1507]: time="2026-01-17T01:32:07.061099024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:32:07.063730 containerd[1507]: time="2026-01-17T01:32:07.063418624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:07.063730 containerd[1507]: time="2026-01-17T01:32:07.063564300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:07.106337 systemd-networkd[1418]: calidc29762e9c3: Link UP Jan 17 01:32:07.111288 systemd-networkd[1418]: calidc29762e9c3: Gained carrier Jan 17 01:32:07.112734 systemd[1]: Started cri-containerd-ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea.scope - libcontainer container ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea. 
Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:06.505 [INFO][4532] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0 calico-apiserver-6594455f78- calico-apiserver 166a9b11-96f4-43ec-a822-357f748e3c20 956 0 2026-01-17 01:31:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6594455f78 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-dv3jc.gb1.brightbox.com calico-apiserver-6594455f78-9rjtt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidc29762e9c3 [] [] }} ContainerID="8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-9rjtt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-" Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:06.505 [INFO][4532] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-9rjtt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:06.625 [INFO][4564] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" HandleID="k8s-pod-network.8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:06.631 [INFO][4564] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" HandleID="k8s-pod-network.8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-dv3jc.gb1.brightbox.com", "pod":"calico-apiserver-6594455f78-9rjtt", "timestamp":"2026-01-17 01:32:06.625611306 +0000 UTC"}, Hostname:"srv-dv3jc.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:06.637 [INFO][4564] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:06.889 [INFO][4564] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:06.890 [INFO][4564] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dv3jc.gb1.brightbox.com' Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:06.954 [INFO][4564] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:06.978 [INFO][4564] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:07.017 [INFO][4564] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:07.022 [INFO][4564] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:07.031 [INFO][4564] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:07.031 [INFO][4564] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:07.034 [INFO][4564] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:07.057 [INFO][4564] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:07.086 [INFO][4564] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.135/26] block=192.168.60.128/26 handle="k8s-pod-network.8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:07.086 [INFO][4564] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.135/26] handle="k8s-pod-network.8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:07.086 [INFO][4564] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 01:32:07.191680 containerd[1507]: 2026-01-17 01:32:07.086 [INFO][4564] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.135/26] IPv6=[] ContainerID="8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" HandleID="k8s-pod-network.8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:07.192834 containerd[1507]: 2026-01-17 01:32:07.096 [INFO][4532] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-9rjtt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0", GenerateName:"calico-apiserver-6594455f78-", Namespace:"calico-apiserver", SelfLink:"", UID:"166a9b11-96f4-43ec-a822-357f748e3c20", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6594455f78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6594455f78-9rjtt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc29762e9c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:07.192834 containerd[1507]: 2026-01-17 01:32:07.098 [INFO][4532] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.135/32] ContainerID="8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-9rjtt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:07.192834 containerd[1507]: 2026-01-17 01:32:07.098 [INFO][4532] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc29762e9c3 ContainerID="8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-9rjtt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:07.192834 containerd[1507]: 2026-01-17 01:32:07.117 [INFO][4532] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-9rjtt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:07.192834 containerd[1507]: 2026-01-17 
01:32:07.119 [INFO][4532] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-9rjtt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0", GenerateName:"calico-apiserver-6594455f78-", Namespace:"calico-apiserver", SelfLink:"", UID:"166a9b11-96f4-43ec-a822-357f748e3c20", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6594455f78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f", Pod:"calico-apiserver-6594455f78-9rjtt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc29762e9c3", MAC:"0a:b9:6e:73:3b:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:07.192834 containerd[1507]: 2026-01-17 01:32:07.180 [INFO][4532] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f" Namespace="calico-apiserver" Pod="calico-apiserver-6594455f78-9rjtt" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:07.204899 containerd[1507]: 2026-01-17 01:32:06.955 [INFO][4588] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Jan 17 01:32:07.204899 containerd[1507]: 2026-01-17 01:32:06.955 [INFO][4588] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" iface="eth0" netns="/var/run/netns/cni-c7043f39-beee-aa7f-9ea1-f97564ddbf33" Jan 17 01:32:07.204899 containerd[1507]: 2026-01-17 01:32:06.956 [INFO][4588] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" iface="eth0" netns="/var/run/netns/cni-c7043f39-beee-aa7f-9ea1-f97564ddbf33" Jan 17 01:32:07.204899 containerd[1507]: 2026-01-17 01:32:06.956 [INFO][4588] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" iface="eth0" netns="/var/run/netns/cni-c7043f39-beee-aa7f-9ea1-f97564ddbf33" Jan 17 01:32:07.204899 containerd[1507]: 2026-01-17 01:32:06.956 [INFO][4588] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Jan 17 01:32:07.204899 containerd[1507]: 2026-01-17 01:32:06.956 [INFO][4588] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Jan 17 01:32:07.204899 containerd[1507]: 2026-01-17 01:32:07.048 [INFO][4636] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" HandleID="k8s-pod-network.cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:07.204899 containerd[1507]: 2026-01-17 01:32:07.048 [INFO][4636] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:07.204899 containerd[1507]: 2026-01-17 01:32:07.090 [INFO][4636] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:07.204899 containerd[1507]: 2026-01-17 01:32:07.143 [WARNING][4636] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" HandleID="k8s-pod-network.cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:07.204899 containerd[1507]: 2026-01-17 01:32:07.143 [INFO][4636] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" HandleID="k8s-pod-network.cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:07.204899 containerd[1507]: 2026-01-17 01:32:07.181 [INFO][4636] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:07.204899 containerd[1507]: 2026-01-17 01:32:07.195 [INFO][4588] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Jan 17 01:32:07.209477 containerd[1507]: time="2026-01-17T01:32:07.209242653Z" level=info msg="TearDown network for sandbox \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\" successfully" Jan 17 01:32:07.209477 containerd[1507]: time="2026-01-17T01:32:07.209284647Z" level=info msg="StopPodSandbox for \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\" returns successfully" Jan 17 01:32:07.210926 systemd[1]: run-netns-cni\x2dc7043f39\x2dbeee\x2daa7f\x2d9ea1\x2df97564ddbf33.mount: Deactivated successfully. Jan 17 01:32:07.212672 containerd[1507]: time="2026-01-17T01:32:07.212245162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fkwl,Uid:60848184-c0a6-4a54-ba29-7889e424733e,Namespace:kube-system,Attempt:1,}" Jan 17 01:32:07.310919 containerd[1507]: time="2026-01-17T01:32:07.310775503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:32:07.311135 containerd[1507]: time="2026-01-17T01:32:07.310949984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:32:07.311135 containerd[1507]: time="2026-01-17T01:32:07.311019520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:07.321167 containerd[1507]: time="2026-01-17T01:32:07.319428697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:07.366218 containerd[1507]: time="2026-01-17T01:32:07.366043520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594455f78-c92j5,Uid:307224a1-2fa9-44a1-ad77-684cc2300054,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea\"" Jan 17 01:32:07.376555 containerd[1507]: time="2026-01-17T01:32:07.376478338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:32:07.382656 systemd[1]: Started cri-containerd-8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f.scope - libcontainer container 8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f. Jan 17 01:32:07.540633 containerd[1507]: time="2026-01-17T01:32:07.540564647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c7d5775b-t6c4g,Uid:50610d67-f39f-4c35-8e6d-c6596bfafc13,Namespace:calico-system,Attempt:1,} returns sandbox id \"98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2\"" Jan 17 01:32:07.680279 systemd-networkd[1418]: vxlan.calico: Link UP Jan 17 01:32:07.680293 systemd-networkd[1418]: vxlan.calico: Gained carrier Jan 17 01:32:07.704290 containerd[1507]: time="2026-01-17T01:32:07.704210511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594455f78-9rjtt,Uid:166a9b11-96f4-43ec-a822-357f748e3c20,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f\"" Jan 17 01:32:07.711404 containerd[1507]: time="2026-01-17T01:32:07.711086382Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:07.721325 containerd[1507]: time="2026-01-17T01:32:07.720960147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:32:07.721325 containerd[1507]: time="2026-01-17T01:32:07.721035654Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:32:07.726529 kubelet[2681]: E0117 01:32:07.724586 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:32:07.726529 kubelet[2681]: E0117 01:32:07.725477 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:32:07.729813 containerd[1507]: time="2026-01-17T01:32:07.726251099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 01:32:07.731321 kubelet[2681]: E0117 01:32:07.726551 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6rcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594455f78-c92j5_calico-apiserver(307224a1-2fa9-44a1-ad77-684cc2300054): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:07.731321 kubelet[2681]: E0117 01:32:07.728383 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" podUID="307224a1-2fa9-44a1-ad77-684cc2300054" Jan 17 01:32:07.730578 systemd-networkd[1418]: cali071417e8f53: Link UP Jan 17 01:32:07.733930 systemd-networkd[1418]: cali071417e8f53: Gained carrier Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.472 [INFO][4705] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0 coredns-668d6bf9bc- kube-system 60848184-c0a6-4a54-ba29-7889e424733e 984 0 2026-01-17 01:31:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-dv3jc.gb1.brightbox.com coredns-668d6bf9bc-9fkwl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali071417e8f53 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fkwl" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-" Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.474 [INFO][4705] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fkwl" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.589 [INFO][4751] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" HandleID="k8s-pod-network.322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.591 [INFO][4751] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" HandleID="k8s-pod-network.322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123e30), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-dv3jc.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-9fkwl", "timestamp":"2026-01-17 01:32:07.589545641 +0000 UTC"}, Hostname:"srv-dv3jc.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.591 [INFO][4751] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.591 [INFO][4751] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
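
The IPAM request just logged (AutoAssignArgs{Num4:1, Num6:0}) asks for a single IPv4 address, and the records that follow satisfy it from the /26 block affine to this node, 192.168.60.128/26. The arithmetic behind those blocks is ordinary CIDR math; a minimal sketch in Go (standard library only, not Calico's code) checking the block size and the address claimed a few records below:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The block affinity confirmed for srv-dv3jc in the records below.
        block := netip.MustParsePrefix("192.168.60.128/26")

        // A /26 spans 2^(32-26) = 64 addresses (.128 through .191).
        fmt.Println("block size:", 1<<(32-block.Bits()))

        // The address the plugin goes on to claim for coredns-668d6bf9bc-9fkwl.
        ip := netip.MustParseAddr("192.168.60.136")
        fmt.Println("in block:", block.Contains(ip)) // true
    }
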
Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.591 [INFO][4751] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dv3jc.gb1.brightbox.com' Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.608 [INFO][4751] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.630 [INFO][4751] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.644 [INFO][4751] ipam/ipam.go 511: Trying affinity for 192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.651 [INFO][4751] ipam/ipam.go 158: Attempting to load block cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.659 [INFO][4751] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.60.128/26 host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.659 [INFO][4751] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.60.128/26 handle="k8s-pod-network.322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.668 [INFO][4751] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563 Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.688 [INFO][4751] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.60.128/26 handle="k8s-pod-network.322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.707 [INFO][4751] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.60.136/26] block=192.168.60.128/26 handle="k8s-pod-network.322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.707 [INFO][4751] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.60.136/26] handle="k8s-pod-network.322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" host="srv-dv3jc.gb1.brightbox.com" Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.707 [INFO][4751] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
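
That completes one allocation round trip: take the host-wide lock, confirm the block affinity, claim a free address, write the block back to the datastore, release the lock. The lock only serializes concurrent CNI invocations on this host; the datastore write is what actually claims the IP. A hypothetical Go sketch of that acquire-claim-release shape (the blockAllocator type and claim function are illustrative stand-ins, with an in-process mutex in place of Calico's host-wide lock and a map in place of the datastore block):

    package main

    import (
        "errors"
        "fmt"
        "net/netip"
        "sync"
    )

    // blockAllocator serializes claims the way the plugin's
    // host-wide IPAM lock serializes concurrent CNI ADDs.
    type blockAllocator struct {
        mu      sync.Mutex
        block   netip.Prefix
        claimed map[netip.Addr]string // addr -> handle ID
    }

    // claim records the first free address in the block under the
    // given handle. A real allocator also skips block-reserved
    // addresses and retries on datastore write conflicts; both are
    // omitted here.
    func (a *blockAllocator) claim(handle string) (netip.Addr, error) {
        a.mu.Lock()
        defer a.mu.Unlock()
        for ip := a.block.Addr(); a.block.Contains(ip); ip = ip.Next() {
            if _, taken := a.claimed[ip]; !taken {
                a.claimed[ip] = handle
                return ip, nil
            }
        }
        return netip.Addr{}, errors.New("block exhausted")
    }

    func main() {
        a := &blockAllocator{
            block:   netip.MustParsePrefix("192.168.60.128/26"),
            claimed: map[netip.Addr]string{},
        }
        ip, err := a.claim("k8s-pod-network.322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563")
        fmt.Println(ip, err)
    }
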
Jan 17 01:32:07.777091 containerd[1507]: 2026-01-17 01:32:07.707 [INFO][4751] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.60.136/26] IPv6=[] ContainerID="322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" HandleID="k8s-pod-network.322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:07.778060 containerd[1507]: 2026-01-17 01:32:07.713 [INFO][4705] cni-plugin/k8s.go 418: Populated endpoint ContainerID="322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fkwl" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"60848184-c0a6-4a54-ba29-7889e424733e", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-9fkwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali071417e8f53", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:07.778060 containerd[1507]: 2026-01-17 01:32:07.714 [INFO][4705] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.136/32] ContainerID="322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fkwl" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:07.778060 containerd[1507]: 2026-01-17 01:32:07.714 [INFO][4705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali071417e8f53 ContainerID="322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fkwl" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:07.778060 containerd[1507]: 2026-01-17 01:32:07.755 [INFO][4705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-9fkwl" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:07.778060 containerd[1507]: 2026-01-17 01:32:07.756 [INFO][4705] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fkwl" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"60848184-c0a6-4a54-ba29-7889e424733e", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563", Pod:"coredns-668d6bf9bc-9fkwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali071417e8f53", MAC:"d2:48:0e:58:ad:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:07.778060 containerd[1507]: 2026-01-17 01:32:07.768 [INFO][4705] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563" Namespace="kube-system" Pod="coredns-668d6bf9bc-9fkwl" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:07.822045 containerd[1507]: time="2026-01-17T01:32:07.821608955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:32:07.822045 containerd[1507]: time="2026-01-17T01:32:07.821727198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:32:07.822045 containerd[1507]: time="2026-01-17T01:32:07.821746277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:07.822045 containerd[1507]: time="2026-01-17T01:32:07.821883969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:32:07.888030 systemd[1]: run-containerd-runc-k8s.io-322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563-runc.kxH2LC.mount: Deactivated successfully. Jan 17 01:32:07.907441 systemd[1]: Started cri-containerd-322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563.scope - libcontainer container 322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563. Jan 17 01:32:08.031768 containerd[1507]: time="2026-01-17T01:32:08.031594683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9fkwl,Uid:60848184-c0a6-4a54-ba29-7889e424733e,Namespace:kube-system,Attempt:1,} returns sandbox id \"322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563\"" Jan 17 01:32:08.042460 containerd[1507]: time="2026-01-17T01:32:08.042266253Z" level=info msg="CreateContainer within sandbox \"322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 01:32:08.072214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4072115837.mount: Deactivated successfully. Jan 17 01:32:08.072841 containerd[1507]: time="2026-01-17T01:32:08.072740255Z" level=info msg="CreateContainer within sandbox \"322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"61574cd8c22844b1da624c75426d7c3983790dd99eb0cb45c706f13aed5e7c75\"" Jan 17 01:32:08.075234 containerd[1507]: time="2026-01-17T01:32:08.074263443Z" level=info msg="StartContainer for \"61574cd8c22844b1da624c75426d7c3983790dd99eb0cb45c706f13aed5e7c75\"" Jan 17 01:32:08.075530 containerd[1507]: time="2026-01-17T01:32:08.075500789Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:08.076889 containerd[1507]: time="2026-01-17T01:32:08.076806945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 01:32:08.077255 containerd[1507]: time="2026-01-17T01:32:08.077201848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 01:32:08.077538 kubelet[2681]: E0117 01:32:08.077485 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 01:32:08.077633 kubelet[2681]: E0117 01:32:08.077554 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 01:32:08.077899 kubelet[2681]: E0117 01:32:08.077830 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qz5x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c7d5775b-t6c4g_calico-system(50610d67-f39f-4c35-8e6d-c6596bfafc13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:08.078423 containerd[1507]: time="2026-01-17T01:32:08.078373326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:32:08.079005 kubelet[2681]: E0117 01:32:08.078946 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" 
podUID="50610d67-f39f-4c35-8e6d-c6596bfafc13" Jan 17 01:32:08.139424 systemd[1]: Started cri-containerd-61574cd8c22844b1da624c75426d7c3983790dd99eb0cb45c706f13aed5e7c75.scope - libcontainer container 61574cd8c22844b1da624c75426d7c3983790dd99eb0cb45c706f13aed5e7c75. Jan 17 01:32:08.157457 systemd-networkd[1418]: cali25c2ac32e38: Gained IPv6LL Jan 17 01:32:08.186620 containerd[1507]: time="2026-01-17T01:32:08.186456191Z" level=info msg="StartContainer for \"61574cd8c22844b1da624c75426d7c3983790dd99eb0cb45c706f13aed5e7c75\" returns successfully" Jan 17 01:32:08.244978 kubelet[2681]: E0117 01:32:08.244875 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" podUID="307224a1-2fa9-44a1-ad77-684cc2300054" Jan 17 01:32:08.246872 kubelet[2681]: E0117 01:32:08.246827 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" podUID="50610d67-f39f-4c35-8e6d-c6596bfafc13" Jan 17 01:32:08.278097 kubelet[2681]: I0117 01:32:08.278020 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9fkwl" podStartSLOduration=51.277998711 podStartE2EDuration="51.277998711s" podCreationTimestamp="2026-01-17 01:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 01:32:08.277329978 +0000 UTC m=+55.865447193" watchObservedRunningTime="2026-01-17 01:32:08.277998711 +0000 UTC m=+55.866115925" Jan 17 01:32:08.419566 containerd[1507]: time="2026-01-17T01:32:08.419201688Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:08.423098 containerd[1507]: time="2026-01-17T01:32:08.423032236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:32:08.423427 containerd[1507]: time="2026-01-17T01:32:08.423157093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:32:08.424034 kubelet[2681]: E0117 01:32:08.423644 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:32:08.424034 kubelet[2681]: E0117 01:32:08.423723 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:32:08.424034 kubelet[2681]: E0117 01:32:08.423901 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c45pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594455f78-9rjtt_calico-apiserver(166a9b11-96f4-43ec-a822-357f748e3c20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:08.427263 kubelet[2681]: E0117 01:32:08.427181 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" podUID="166a9b11-96f4-43ec-a822-357f748e3c20" Jan 17 01:32:08.733500 
systemd-networkd[1418]: cali5060708eabe: Gained IPv6LL Jan 17 01:32:08.797364 systemd-networkd[1418]: calidc29762e9c3: Gained IPv6LL Jan 17 01:32:08.989403 systemd-networkd[1418]: vxlan.calico: Gained IPv6LL Jan 17 01:32:09.251321 kubelet[2681]: E0117 01:32:09.250476 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" podUID="50610d67-f39f-4c35-8e6d-c6596bfafc13" Jan 17 01:32:09.251321 kubelet[2681]: E0117 01:32:09.250909 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" podUID="166a9b11-96f4-43ec-a822-357f748e3c20" Jan 17 01:32:09.251321 kubelet[2681]: E0117 01:32:09.250996 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" podUID="307224a1-2fa9-44a1-ad77-684cc2300054" Jan 17 01:32:09.629569 systemd-networkd[1418]: cali071417e8f53: Gained IPv6LL Jan 17 01:32:12.660374 containerd[1507]: time="2026-01-17T01:32:12.660300172Z" level=info msg="StopPodSandbox for \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\"" Jan 17 01:32:12.800582 containerd[1507]: 2026-01-17 01:32:12.727 [WARNING][4956] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0", GenerateName:"calico-apiserver-6594455f78-", Namespace:"calico-apiserver", SelfLink:"", UID:"307224a1-2fa9-44a1-ad77-684cc2300054", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6594455f78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea", Pod:"calico-apiserver-6594455f78-c92j5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25c2ac32e38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:12.800582 containerd[1507]: 2026-01-17 01:32:12.728 [INFO][4956] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Jan 17 01:32:12.800582 containerd[1507]: 2026-01-17 01:32:12.728 [INFO][4956] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" iface="eth0" netns="" Jan 17 01:32:12.800582 containerd[1507]: 2026-01-17 01:32:12.728 [INFO][4956] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Jan 17 01:32:12.800582 containerd[1507]: 2026-01-17 01:32:12.728 [INFO][4956] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Jan 17 01:32:12.800582 containerd[1507]: 2026-01-17 01:32:12.779 [INFO][4965] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" HandleID="k8s-pod-network.df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:12.800582 containerd[1507]: 2026-01-17 01:32:12.780 [INFO][4965] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:12.800582 containerd[1507]: 2026-01-17 01:32:12.781 [INFO][4965] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:12.800582 containerd[1507]: 2026-01-17 01:32:12.793 [WARNING][4965] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" HandleID="k8s-pod-network.df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:12.800582 containerd[1507]: 2026-01-17 01:32:12.793 [INFO][4965] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" HandleID="k8s-pod-network.df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:12.800582 containerd[1507]: 2026-01-17 01:32:12.796 [INFO][4965] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:12.800582 containerd[1507]: 2026-01-17 01:32:12.798 [INFO][4956] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Jan 17 01:32:12.801796 containerd[1507]: time="2026-01-17T01:32:12.800623075Z" level=info msg="TearDown network for sandbox \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\" successfully" Jan 17 01:32:12.801796 containerd[1507]: time="2026-01-17T01:32:12.800663545Z" level=info msg="StopPodSandbox for \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\" returns successfully" Jan 17 01:32:12.811181 containerd[1507]: time="2026-01-17T01:32:12.810214674Z" level=info msg="RemovePodSandbox for \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\"" Jan 17 01:32:12.811181 containerd[1507]: time="2026-01-17T01:32:12.810339303Z" level=info msg="Forcibly stopping sandbox \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\"" Jan 17 01:32:12.930672 containerd[1507]: 2026-01-17 01:32:12.870 [WARNING][4981] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0", GenerateName:"calico-apiserver-6594455f78-", Namespace:"calico-apiserver", SelfLink:"", UID:"307224a1-2fa9-44a1-ad77-684cc2300054", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6594455f78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"ffa570718620a26d175d0eb393bc3c4748ef734b4698b0d7c9cc13759e66a9ea", Pod:"calico-apiserver-6594455f78-c92j5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25c2ac32e38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:12.930672 containerd[1507]: 2026-01-17 01:32:12.870 [INFO][4981] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Jan 17 01:32:12.930672 containerd[1507]: 2026-01-17 01:32:12.870 [INFO][4981] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" iface="eth0" netns="" Jan 17 01:32:12.930672 containerd[1507]: 2026-01-17 01:32:12.870 [INFO][4981] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Jan 17 01:32:12.930672 containerd[1507]: 2026-01-17 01:32:12.870 [INFO][4981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Jan 17 01:32:12.930672 containerd[1507]: 2026-01-17 01:32:12.906 [INFO][4988] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" HandleID="k8s-pod-network.df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:12.930672 containerd[1507]: 2026-01-17 01:32:12.907 [INFO][4988] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:12.930672 containerd[1507]: 2026-01-17 01:32:12.907 [INFO][4988] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:12.930672 containerd[1507]: 2026-01-17 01:32:12.921 [WARNING][4988] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" HandleID="k8s-pod-network.df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:12.930672 containerd[1507]: 2026-01-17 01:32:12.921 [INFO][4988] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" HandleID="k8s-pod-network.df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--c92j5-eth0" Jan 17 01:32:12.930672 containerd[1507]: 2026-01-17 01:32:12.925 [INFO][4988] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:12.930672 containerd[1507]: 2026-01-17 01:32:12.928 [INFO][4981] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3" Jan 17 01:32:12.932806 containerd[1507]: time="2026-01-17T01:32:12.930697618Z" level=info msg="TearDown network for sandbox \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\" successfully" Jan 17 01:32:12.946091 containerd[1507]: time="2026-01-17T01:32:12.946011243Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:32:12.946264 containerd[1507]: time="2026-01-17T01:32:12.946163454Z" level=info msg="RemovePodSandbox \"df877c7b87720489fe5d57fe7cf44a5ed38897e107d1333b308ec4954902d0c3\" returns successfully" Jan 17 01:32:12.947715 containerd[1507]: time="2026-01-17T01:32:12.947192785Z" level=info msg="StopPodSandbox for \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\"" Jan 17 01:32:13.058188 containerd[1507]: 2026-01-17 01:32:13.003 [WARNING][5003] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a924e366-2268-4f4b-91a1-779a1cb6d303", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e", Pod:"goldmane-666569f655-vdtbx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76895980ec9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:13.058188 containerd[1507]: 2026-01-17 01:32:13.003 [INFO][5003] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Jan 17 01:32:13.058188 containerd[1507]: 2026-01-17 01:32:13.003 [INFO][5003] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" iface="eth0" netns="" Jan 17 01:32:13.058188 containerd[1507]: 2026-01-17 01:32:13.003 [INFO][5003] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Jan 17 01:32:13.058188 containerd[1507]: 2026-01-17 01:32:13.004 [INFO][5003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Jan 17 01:32:13.058188 containerd[1507]: 2026-01-17 01:32:13.037 [INFO][5010] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" HandleID="k8s-pod-network.0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Workload="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:13.058188 containerd[1507]: 2026-01-17 01:32:13.038 [INFO][5010] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:13.058188 containerd[1507]: 2026-01-17 01:32:13.038 [INFO][5010] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:13.058188 containerd[1507]: 2026-01-17 01:32:13.047 [WARNING][5010] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" HandleID="k8s-pod-network.0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Workload="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:13.058188 containerd[1507]: 2026-01-17 01:32:13.048 [INFO][5010] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" HandleID="k8s-pod-network.0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Workload="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:13.058188 containerd[1507]: 2026-01-17 01:32:13.053 [INFO][5010] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:13.058188 containerd[1507]: 2026-01-17 01:32:13.055 [INFO][5003] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Jan 17 01:32:13.060068 containerd[1507]: time="2026-01-17T01:32:13.059148181Z" level=info msg="TearDown network for sandbox \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\" successfully" Jan 17 01:32:13.060068 containerd[1507]: time="2026-01-17T01:32:13.059214176Z" level=info msg="StopPodSandbox for \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\" returns successfully" Jan 17 01:32:13.060711 containerd[1507]: time="2026-01-17T01:32:13.060479301Z" level=info msg="RemovePodSandbox for \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\"" Jan 17 01:32:13.060711 containerd[1507]: time="2026-01-17T01:32:13.060524274Z" level=info msg="Forcibly stopping sandbox \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\"" Jan 17 01:32:13.194076 containerd[1507]: 2026-01-17 01:32:13.117 [WARNING][5024] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a924e366-2268-4f4b-91a1-779a1cb6d303", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"58e46d3b823e6b06dab346554f15e8e2009bba2fc30faab3ed365d651230f27e", Pod:"goldmane-666569f655-vdtbx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76895980ec9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:13.194076 containerd[1507]: 2026-01-17 01:32:13.118 [INFO][5024] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Jan 17 01:32:13.194076 containerd[1507]: 2026-01-17 01:32:13.118 [INFO][5024] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" iface="eth0" netns="" Jan 17 01:32:13.194076 containerd[1507]: 2026-01-17 01:32:13.118 [INFO][5024] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Jan 17 01:32:13.194076 containerd[1507]: 2026-01-17 01:32:13.118 [INFO][5024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Jan 17 01:32:13.194076 containerd[1507]: 2026-01-17 01:32:13.169 [INFO][5031] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" HandleID="k8s-pod-network.0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Workload="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:13.194076 containerd[1507]: 2026-01-17 01:32:13.173 [INFO][5031] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:13.194076 containerd[1507]: 2026-01-17 01:32:13.173 [INFO][5031] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:13.194076 containerd[1507]: 2026-01-17 01:32:13.184 [WARNING][5031] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" HandleID="k8s-pod-network.0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Workload="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:13.194076 containerd[1507]: 2026-01-17 01:32:13.185 [INFO][5031] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" HandleID="k8s-pod-network.0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Workload="srv--dv3jc.gb1.brightbox.com-k8s-goldmane--666569f655--vdtbx-eth0" Jan 17 01:32:13.194076 containerd[1507]: 2026-01-17 01:32:13.189 [INFO][5031] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:13.194076 containerd[1507]: 2026-01-17 01:32:13.191 [INFO][5024] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276" Jan 17 01:32:13.194839 containerd[1507]: time="2026-01-17T01:32:13.194078798Z" level=info msg="TearDown network for sandbox \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\" successfully" Jan 17 01:32:13.200462 containerd[1507]: time="2026-01-17T01:32:13.200393833Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:32:13.200572 containerd[1507]: time="2026-01-17T01:32:13.200512444Z" level=info msg="RemovePodSandbox \"0663ff22bcd80776be8fef432ad5a73835a100051aae284e57bbe39f977e5276\" returns successfully" Jan 17 01:32:13.201334 containerd[1507]: time="2026-01-17T01:32:13.201280112Z" level=info msg="StopPodSandbox for \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\"" Jan 17 01:32:13.569465 containerd[1507]: 2026-01-17 01:32:13.503 [WARNING][5046] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4a82681c-7367-4e06-9a33-de4eeb86a08d", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2", Pod:"coredns-668d6bf9bc-j2ntp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali010329f8a72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:13.569465 containerd[1507]: 2026-01-17 01:32:13.504 [INFO][5046] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Jan 17 01:32:13.569465 containerd[1507]: 2026-01-17 01:32:13.504 [INFO][5046] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" iface="eth0" netns="" Jan 17 01:32:13.569465 containerd[1507]: 2026-01-17 01:32:13.504 [INFO][5046] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Jan 17 01:32:13.569465 containerd[1507]: 2026-01-17 01:32:13.504 [INFO][5046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Jan 17 01:32:13.569465 containerd[1507]: 2026-01-17 01:32:13.549 [INFO][5053] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" HandleID="k8s-pod-network.ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:13.569465 containerd[1507]: 2026-01-17 01:32:13.549 [INFO][5053] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:13.569465 containerd[1507]: 2026-01-17 01:32:13.549 [INFO][5053] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:32:13.569465 containerd[1507]: 2026-01-17 01:32:13.559 [WARNING][5053] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" HandleID="k8s-pod-network.ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:13.569465 containerd[1507]: 2026-01-17 01:32:13.559 [INFO][5053] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" HandleID="k8s-pod-network.ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:13.569465 containerd[1507]: 2026-01-17 01:32:13.563 [INFO][5053] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:13.569465 containerd[1507]: 2026-01-17 01:32:13.565 [INFO][5046] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Jan 17 01:32:13.569465 containerd[1507]: time="2026-01-17T01:32:13.569061038Z" level=info msg="TearDown network for sandbox \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\" successfully" Jan 17 01:32:13.569465 containerd[1507]: time="2026-01-17T01:32:13.569104173Z" level=info msg="StopPodSandbox for \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\" returns successfully" Jan 17 01:32:13.571498 containerd[1507]: time="2026-01-17T01:32:13.569952384Z" level=info msg="RemovePodSandbox for \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\"" Jan 17 01:32:13.571498 containerd[1507]: time="2026-01-17T01:32:13.569991502Z" level=info msg="Forcibly stopping sandbox \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\"" Jan 17 01:32:13.696956 containerd[1507]: 2026-01-17 01:32:13.631 [WARNING][5067] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4a82681c-7367-4e06-9a33-de4eeb86a08d", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"229234724328c2f27e8f9fe84b96f8cf8f544d1fad0451511218136f960d38e2", Pod:"coredns-668d6bf9bc-j2ntp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali010329f8a72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:13.696956 containerd[1507]: 2026-01-17 01:32:13.632 [INFO][5067] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Jan 17 01:32:13.696956 containerd[1507]: 2026-01-17 01:32:13.632 [INFO][5067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" iface="eth0" netns="" Jan 17 01:32:13.696956 containerd[1507]: 2026-01-17 01:32:13.632 [INFO][5067] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Jan 17 01:32:13.696956 containerd[1507]: 2026-01-17 01:32:13.632 [INFO][5067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Jan 17 01:32:13.696956 containerd[1507]: 2026-01-17 01:32:13.677 [INFO][5076] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" HandleID="k8s-pod-network.ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:13.696956 containerd[1507]: 2026-01-17 01:32:13.677 [INFO][5076] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:13.696956 containerd[1507]: 2026-01-17 01:32:13.677 [INFO][5076] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:32:13.696956 containerd[1507]: 2026-01-17 01:32:13.688 [WARNING][5076] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" HandleID="k8s-pod-network.ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:13.696956 containerd[1507]: 2026-01-17 01:32:13.688 [INFO][5076] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" HandleID="k8s-pod-network.ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--j2ntp-eth0" Jan 17 01:32:13.696956 containerd[1507]: 2026-01-17 01:32:13.691 [INFO][5076] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:13.696956 containerd[1507]: 2026-01-17 01:32:13.694 [INFO][5067] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92" Jan 17 01:32:13.698364 containerd[1507]: time="2026-01-17T01:32:13.698232823Z" level=info msg="TearDown network for sandbox \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\" successfully" Jan 17 01:32:13.702896 containerd[1507]: time="2026-01-17T01:32:13.702840835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:32:13.702983 containerd[1507]: time="2026-01-17T01:32:13.702936395Z" level=info msg="RemovePodSandbox \"ed66ed23d9c996d5b36fcce8ae56d4ec765a4be077b8c1457761cd0b4d18cc92\" returns successfully" Jan 17 01:32:13.704403 containerd[1507]: time="2026-01-17T01:32:13.703885224Z" level=info msg="StopPodSandbox for \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\"" Jan 17 01:32:14.028951 containerd[1507]: 2026-01-17 01:32:13.766 [WARNING][5094] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-whisker--84769948d8--jfj2k-eth0" Jan 17 01:32:14.028951 containerd[1507]: 2026-01-17 01:32:13.766 [INFO][5094] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Jan 17 01:32:14.028951 containerd[1507]: 2026-01-17 01:32:13.766 [INFO][5094] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" iface="eth0" netns="" Jan 17 01:32:14.028951 containerd[1507]: 2026-01-17 01:32:13.766 [INFO][5094] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Jan 17 01:32:14.028951 containerd[1507]: 2026-01-17 01:32:13.766 [INFO][5094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Jan 17 01:32:14.028951 containerd[1507]: 2026-01-17 01:32:13.803 [INFO][5101] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" HandleID="k8s-pod-network.2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Workload="srv--dv3jc.gb1.brightbox.com-k8s-whisker--84769948d8--jfj2k-eth0" Jan 17 01:32:14.028951 containerd[1507]: 2026-01-17 01:32:13.803 [INFO][5101] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:14.028951 containerd[1507]: 2026-01-17 01:32:13.803 [INFO][5101] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:14.028951 containerd[1507]: 2026-01-17 01:32:13.815 [WARNING][5101] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" HandleID="k8s-pod-network.2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Workload="srv--dv3jc.gb1.brightbox.com-k8s-whisker--84769948d8--jfj2k-eth0" Jan 17 01:32:14.028951 containerd[1507]: 2026-01-17 01:32:13.815 [INFO][5101] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" HandleID="k8s-pod-network.2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Workload="srv--dv3jc.gb1.brightbox.com-k8s-whisker--84769948d8--jfj2k-eth0" Jan 17 01:32:14.028951 containerd[1507]: 2026-01-17 01:32:14.023 [INFO][5101] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:14.028951 containerd[1507]: 2026-01-17 01:32:14.026 [INFO][5094] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Jan 17 01:32:14.029798 containerd[1507]: time="2026-01-17T01:32:14.028969378Z" level=info msg="TearDown network for sandbox \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\" successfully" Jan 17 01:32:14.029798 containerd[1507]: time="2026-01-17T01:32:14.029021493Z" level=info msg="StopPodSandbox for \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\" returns successfully" Jan 17 01:32:14.030539 containerd[1507]: time="2026-01-17T01:32:14.030444771Z" level=info msg="RemovePodSandbox for \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\"" Jan 17 01:32:14.030620 containerd[1507]: time="2026-01-17T01:32:14.030565303Z" level=info msg="Forcibly stopping sandbox \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\"" Jan 17 01:32:14.155179 containerd[1507]: 2026-01-17 01:32:14.094 [WARNING][5117] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" WorkloadEndpoint="srv--dv3jc.gb1.brightbox.com-k8s-whisker--84769948d8--jfj2k-eth0" Jan 17 01:32:14.155179 containerd[1507]: 2026-01-17 01:32:14.094 [INFO][5117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Jan 17 01:32:14.155179 containerd[1507]: 2026-01-17 01:32:14.094 [INFO][5117] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" iface="eth0" netns="" Jan 17 01:32:14.155179 containerd[1507]: 2026-01-17 01:32:14.095 [INFO][5117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Jan 17 01:32:14.155179 containerd[1507]: 2026-01-17 01:32:14.095 [INFO][5117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Jan 17 01:32:14.155179 containerd[1507]: 2026-01-17 01:32:14.134 [INFO][5124] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" HandleID="k8s-pod-network.2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Workload="srv--dv3jc.gb1.brightbox.com-k8s-whisker--84769948d8--jfj2k-eth0" Jan 17 01:32:14.155179 containerd[1507]: 2026-01-17 01:32:14.134 [INFO][5124] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:14.155179 containerd[1507]: 2026-01-17 01:32:14.134 [INFO][5124] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:14.155179 containerd[1507]: 2026-01-17 01:32:14.146 [WARNING][5124] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" HandleID="k8s-pod-network.2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Workload="srv--dv3jc.gb1.brightbox.com-k8s-whisker--84769948d8--jfj2k-eth0" Jan 17 01:32:14.155179 containerd[1507]: 2026-01-17 01:32:14.146 [INFO][5124] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" HandleID="k8s-pod-network.2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Workload="srv--dv3jc.gb1.brightbox.com-k8s-whisker--84769948d8--jfj2k-eth0" Jan 17 01:32:14.155179 containerd[1507]: 2026-01-17 01:32:14.149 [INFO][5124] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:14.155179 containerd[1507]: 2026-01-17 01:32:14.152 [INFO][5117] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23" Jan 17 01:32:14.155179 containerd[1507]: time="2026-01-17T01:32:14.154951543Z" level=info msg="TearDown network for sandbox \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\" successfully" Jan 17 01:32:14.160098 containerd[1507]: time="2026-01-17T01:32:14.160059749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:32:14.160427 containerd[1507]: time="2026-01-17T01:32:14.160279500Z" level=info msg="RemovePodSandbox \"2a7ec44068e35b1b67d849e007f92d82b8356a7ff121e18bcbc881b5978f2d23\" returns successfully" Jan 17 01:32:14.161144 containerd[1507]: time="2026-01-17T01:32:14.161078288Z" level=info msg="StopPodSandbox for \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\"" Jan 17 01:32:14.263382 containerd[1507]: 2026-01-17 01:32:14.213 [WARNING][5138] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0", GenerateName:"calico-kube-controllers-c7d5775b-", Namespace:"calico-system", SelfLink:"", UID:"50610d67-f39f-4c35-8e6d-c6596bfafc13", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c7d5775b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2", Pod:"calico-kube-controllers-c7d5775b-t6c4g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5060708eabe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:14.263382 containerd[1507]: 2026-01-17 01:32:14.214 [INFO][5138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Jan 17 01:32:14.263382 containerd[1507]: 2026-01-17 01:32:14.214 [INFO][5138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" iface="eth0" netns="" Jan 17 01:32:14.263382 containerd[1507]: 2026-01-17 01:32:14.214 [INFO][5138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Jan 17 01:32:14.263382 containerd[1507]: 2026-01-17 01:32:14.214 [INFO][5138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Jan 17 01:32:14.263382 containerd[1507]: 2026-01-17 01:32:14.243 [INFO][5145] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" HandleID="k8s-pod-network.b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:14.263382 containerd[1507]: 2026-01-17 01:32:14.243 [INFO][5145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:14.263382 containerd[1507]: 2026-01-17 01:32:14.243 [INFO][5145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:14.263382 containerd[1507]: 2026-01-17 01:32:14.256 [WARNING][5145] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" HandleID="k8s-pod-network.b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:14.263382 containerd[1507]: 2026-01-17 01:32:14.256 [INFO][5145] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" HandleID="k8s-pod-network.b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:14.263382 containerd[1507]: 2026-01-17 01:32:14.259 [INFO][5145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:14.263382 containerd[1507]: 2026-01-17 01:32:14.261 [INFO][5138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Jan 17 01:32:14.264083 containerd[1507]: time="2026-01-17T01:32:14.263444010Z" level=info msg="TearDown network for sandbox \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\" successfully" Jan 17 01:32:14.264083 containerd[1507]: time="2026-01-17T01:32:14.263489297Z" level=info msg="StopPodSandbox for \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\" returns successfully" Jan 17 01:32:14.264465 containerd[1507]: time="2026-01-17T01:32:14.264421789Z" level=info msg="RemovePodSandbox for \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\"" Jan 17 01:32:14.264531 containerd[1507]: time="2026-01-17T01:32:14.264469258Z" level=info msg="Forcibly stopping sandbox \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\"" Jan 17 01:32:14.376033 containerd[1507]: 2026-01-17 01:32:14.321 [WARNING][5159] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0", GenerateName:"calico-kube-controllers-c7d5775b-", Namespace:"calico-system", SelfLink:"", UID:"50610d67-f39f-4c35-8e6d-c6596bfafc13", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c7d5775b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"98a144bf49556aa90ab5edc88cb7d31a99b2f1678c9e5765b21b1d55c20bcfc2", Pod:"calico-kube-controllers-c7d5775b-t6c4g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5060708eabe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:14.376033 containerd[1507]: 2026-01-17 01:32:14.322 [INFO][5159] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Jan 17 01:32:14.376033 containerd[1507]: 2026-01-17 01:32:14.322 [INFO][5159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" iface="eth0" netns="" Jan 17 01:32:14.376033 containerd[1507]: 2026-01-17 01:32:14.322 [INFO][5159] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Jan 17 01:32:14.376033 containerd[1507]: 2026-01-17 01:32:14.322 [INFO][5159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Jan 17 01:32:14.376033 containerd[1507]: 2026-01-17 01:32:14.356 [INFO][5166] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" HandleID="k8s-pod-network.b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:14.376033 containerd[1507]: 2026-01-17 01:32:14.357 [INFO][5166] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:14.376033 containerd[1507]: 2026-01-17 01:32:14.357 [INFO][5166] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:14.376033 containerd[1507]: 2026-01-17 01:32:14.367 [WARNING][5166] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" HandleID="k8s-pod-network.b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:14.376033 containerd[1507]: 2026-01-17 01:32:14.367 [INFO][5166] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" HandleID="k8s-pod-network.b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--kube--controllers--c7d5775b--t6c4g-eth0" Jan 17 01:32:14.376033 containerd[1507]: 2026-01-17 01:32:14.370 [INFO][5166] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:14.376033 containerd[1507]: 2026-01-17 01:32:14.372 [INFO][5159] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c" Jan 17 01:32:14.376033 containerd[1507]: time="2026-01-17T01:32:14.375994480Z" level=info msg="TearDown network for sandbox \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\" successfully" Jan 17 01:32:14.381539 containerd[1507]: time="2026-01-17T01:32:14.381461394Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:32:14.381625 containerd[1507]: time="2026-01-17T01:32:14.381557786Z" level=info msg="RemovePodSandbox \"b74ed69ccf8e27a3ff2d170ff78777fb920cbb68323fe4e0677fd9d658fb988c\" returns successfully" Jan 17 01:32:14.382273 containerd[1507]: time="2026-01-17T01:32:14.382232004Z" level=info msg="StopPodSandbox for \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\"" Jan 17 01:32:14.491188 containerd[1507]: 2026-01-17 01:32:14.442 [WARNING][5180] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"60848184-c0a6-4a54-ba29-7889e424733e", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563", Pod:"coredns-668d6bf9bc-9fkwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali071417e8f53", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:14.491188 containerd[1507]: 2026-01-17 01:32:14.442 [INFO][5180] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Jan 17 01:32:14.491188 containerd[1507]: 2026-01-17 01:32:14.442 [INFO][5180] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" iface="eth0" netns="" Jan 17 01:32:14.491188 containerd[1507]: 2026-01-17 01:32:14.442 [INFO][5180] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Jan 17 01:32:14.491188 containerd[1507]: 2026-01-17 01:32:14.442 [INFO][5180] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Jan 17 01:32:14.491188 containerd[1507]: 2026-01-17 01:32:14.472 [INFO][5187] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" HandleID="k8s-pod-network.cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:14.491188 containerd[1507]: 2026-01-17 01:32:14.472 [INFO][5187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:14.491188 containerd[1507]: 2026-01-17 01:32:14.472 [INFO][5187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:32:14.491188 containerd[1507]: 2026-01-17 01:32:14.483 [WARNING][5187] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" HandleID="k8s-pod-network.cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:14.491188 containerd[1507]: 2026-01-17 01:32:14.483 [INFO][5187] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" HandleID="k8s-pod-network.cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:14.491188 containerd[1507]: 2026-01-17 01:32:14.486 [INFO][5187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:14.491188 containerd[1507]: 2026-01-17 01:32:14.488 [INFO][5180] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Jan 17 01:32:14.491188 containerd[1507]: time="2026-01-17T01:32:14.490842218Z" level=info msg="TearDown network for sandbox \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\" successfully" Jan 17 01:32:14.491188 containerd[1507]: time="2026-01-17T01:32:14.490893047Z" level=info msg="StopPodSandbox for \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\" returns successfully" Jan 17 01:32:14.492236 containerd[1507]: time="2026-01-17T01:32:14.491679105Z" level=info msg="RemovePodSandbox for \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\"" Jan 17 01:32:14.492236 containerd[1507]: time="2026-01-17T01:32:14.491737197Z" level=info msg="Forcibly stopping sandbox \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\"" Jan 17 01:32:14.615926 containerd[1507]: 2026-01-17 01:32:14.542 [WARNING][5201] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"60848184-c0a6-4a54-ba29-7889e424733e", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"322f188ee6922cf6a37883257f63c378fdd95bf4e9bc82f62af3fd60d07db563", Pod:"coredns-668d6bf9bc-9fkwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali071417e8f53", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:14.615926 containerd[1507]: 2026-01-17 01:32:14.544 [INFO][5201] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Jan 17 01:32:14.615926 containerd[1507]: 2026-01-17 01:32:14.544 [INFO][5201] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" iface="eth0" netns="" Jan 17 01:32:14.615926 containerd[1507]: 2026-01-17 01:32:14.545 [INFO][5201] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Jan 17 01:32:14.615926 containerd[1507]: 2026-01-17 01:32:14.545 [INFO][5201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Jan 17 01:32:14.615926 containerd[1507]: 2026-01-17 01:32:14.596 [INFO][5208] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" HandleID="k8s-pod-network.cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:14.615926 containerd[1507]: 2026-01-17 01:32:14.597 [INFO][5208] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:14.615926 containerd[1507]: 2026-01-17 01:32:14.597 [INFO][5208] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:32:14.615926 containerd[1507]: 2026-01-17 01:32:14.608 [WARNING][5208] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" HandleID="k8s-pod-network.cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:14.615926 containerd[1507]: 2026-01-17 01:32:14.608 [INFO][5208] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" HandleID="k8s-pod-network.cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Workload="srv--dv3jc.gb1.brightbox.com-k8s-coredns--668d6bf9bc--9fkwl-eth0" Jan 17 01:32:14.615926 containerd[1507]: 2026-01-17 01:32:14.610 [INFO][5208] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:14.615926 containerd[1507]: 2026-01-17 01:32:14.613 [INFO][5201] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4" Jan 17 01:32:14.617253 containerd[1507]: time="2026-01-17T01:32:14.615971437Z" level=info msg="TearDown network for sandbox \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\" successfully" Jan 17 01:32:14.626452 containerd[1507]: time="2026-01-17T01:32:14.626297431Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:32:14.626452 containerd[1507]: time="2026-01-17T01:32:14.626380206Z" level=info msg="RemovePodSandbox \"cc477d579a82384dc7038a5e18a74f401f2952868b378b732faf507b4dfa12a4\" returns successfully" Jan 17 01:32:14.627564 containerd[1507]: time="2026-01-17T01:32:14.627033654Z" level=info msg="StopPodSandbox for \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\"" Jan 17 01:32:14.730058 containerd[1507]: 2026-01-17 01:32:14.681 [WARNING][5223] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0", GenerateName:"calico-apiserver-6594455f78-", Namespace:"calico-apiserver", SelfLink:"", UID:"166a9b11-96f4-43ec-a822-357f748e3c20", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6594455f78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f", Pod:"calico-apiserver-6594455f78-9rjtt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc29762e9c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:14.730058 containerd[1507]: 2026-01-17 01:32:14.681 [INFO][5223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Jan 17 01:32:14.730058 containerd[1507]: 2026-01-17 01:32:14.682 [INFO][5223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" iface="eth0" netns="" Jan 17 01:32:14.730058 containerd[1507]: 2026-01-17 01:32:14.682 [INFO][5223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Jan 17 01:32:14.730058 containerd[1507]: 2026-01-17 01:32:14.682 [INFO][5223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Jan 17 01:32:14.730058 containerd[1507]: 2026-01-17 01:32:14.712 [INFO][5230] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" HandleID="k8s-pod-network.133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:14.730058 containerd[1507]: 2026-01-17 01:32:14.712 [INFO][5230] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:14.730058 containerd[1507]: 2026-01-17 01:32:14.712 [INFO][5230] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:14.730058 containerd[1507]: 2026-01-17 01:32:14.722 [WARNING][5230] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" HandleID="k8s-pod-network.133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:14.730058 containerd[1507]: 2026-01-17 01:32:14.722 [INFO][5230] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" HandleID="k8s-pod-network.133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:14.730058 containerd[1507]: 2026-01-17 01:32:14.725 [INFO][5230] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:14.730058 containerd[1507]: 2026-01-17 01:32:14.727 [INFO][5223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Jan 17 01:32:14.732157 containerd[1507]: time="2026-01-17T01:32:14.731150565Z" level=info msg="TearDown network for sandbox \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\" successfully" Jan 17 01:32:14.732157 containerd[1507]: time="2026-01-17T01:32:14.731217789Z" level=info msg="StopPodSandbox for \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\" returns successfully" Jan 17 01:32:14.732565 containerd[1507]: time="2026-01-17T01:32:14.732524045Z" level=info msg="RemovePodSandbox for \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\"" Jan 17 01:32:14.732644 containerd[1507]: time="2026-01-17T01:32:14.732596599Z" level=info msg="Forcibly stopping sandbox \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\"" Jan 17 01:32:14.842226 containerd[1507]: 2026-01-17 01:32:14.790 [WARNING][5244] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0", GenerateName:"calico-apiserver-6594455f78-", Namespace:"calico-apiserver", SelfLink:"", UID:"166a9b11-96f4-43ec-a822-357f748e3c20", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6594455f78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"8fd43ede9c617de7a37ad5942c9e84b873084e27cb12f9c7201ab570f7442c8f", Pod:"calico-apiserver-6594455f78-9rjtt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc29762e9c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:14.842226 containerd[1507]: 2026-01-17 01:32:14.790 [INFO][5244] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Jan 17 01:32:14.842226 containerd[1507]: 2026-01-17 01:32:14.790 [INFO][5244] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" iface="eth0" netns="" Jan 17 01:32:14.842226 containerd[1507]: 2026-01-17 01:32:14.790 [INFO][5244] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Jan 17 01:32:14.842226 containerd[1507]: 2026-01-17 01:32:14.790 [INFO][5244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Jan 17 01:32:14.842226 containerd[1507]: 2026-01-17 01:32:14.825 [INFO][5251] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" HandleID="k8s-pod-network.133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:14.842226 containerd[1507]: 2026-01-17 01:32:14.825 [INFO][5251] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:14.842226 containerd[1507]: 2026-01-17 01:32:14.825 [INFO][5251] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:14.842226 containerd[1507]: 2026-01-17 01:32:14.834 [WARNING][5251] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" HandleID="k8s-pod-network.133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:14.842226 containerd[1507]: 2026-01-17 01:32:14.834 [INFO][5251] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" HandleID="k8s-pod-network.133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Workload="srv--dv3jc.gb1.brightbox.com-k8s-calico--apiserver--6594455f78--9rjtt-eth0" Jan 17 01:32:14.842226 containerd[1507]: 2026-01-17 01:32:14.837 [INFO][5251] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:14.842226 containerd[1507]: 2026-01-17 01:32:14.839 [INFO][5244] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d" Jan 17 01:32:14.843552 containerd[1507]: time="2026-01-17T01:32:14.842369404Z" level=info msg="TearDown network for sandbox \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\" successfully" Jan 17 01:32:14.848694 containerd[1507]: time="2026-01-17T01:32:14.848546452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:32:14.848694 containerd[1507]: time="2026-01-17T01:32:14.848625347Z" level=info msg="RemovePodSandbox \"133a041396a3bfdca90ffe6a03a3cd3e11949479fb2be79a6d21714fa8a2605d\" returns successfully" Jan 17 01:32:14.849981 containerd[1507]: time="2026-01-17T01:32:14.849461649Z" level=info msg="StopPodSandbox for \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\"" Jan 17 01:32:14.982870 containerd[1507]: 2026-01-17 01:32:14.906 [WARNING][5265] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c16de311-2d09-4fff-8444-304a8ff3b2b5", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513", Pod:"csi-node-driver-crszt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliad4502ce9ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:14.982870 containerd[1507]: 2026-01-17 01:32:14.906 [INFO][5265] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Jan 17 01:32:14.982870 containerd[1507]: 2026-01-17 01:32:14.906 [INFO][5265] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" iface="eth0" netns="" Jan 17 01:32:14.982870 containerd[1507]: 2026-01-17 01:32:14.906 [INFO][5265] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Jan 17 01:32:14.982870 containerd[1507]: 2026-01-17 01:32:14.906 [INFO][5265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Jan 17 01:32:14.982870 containerd[1507]: 2026-01-17 01:32:14.944 [INFO][5272] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" HandleID="k8s-pod-network.a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Workload="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:14.982870 containerd[1507]: 2026-01-17 01:32:14.944 [INFO][5272] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:14.982870 containerd[1507]: 2026-01-17 01:32:14.944 [INFO][5272] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:14.982870 containerd[1507]: 2026-01-17 01:32:14.965 [WARNING][5272] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" HandleID="k8s-pod-network.a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Workload="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:14.982870 containerd[1507]: 2026-01-17 01:32:14.965 [INFO][5272] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" HandleID="k8s-pod-network.a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Workload="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:14.982870 containerd[1507]: 2026-01-17 01:32:14.977 [INFO][5272] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:14.982870 containerd[1507]: 2026-01-17 01:32:14.980 [INFO][5265] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Jan 17 01:32:14.983732 containerd[1507]: time="2026-01-17T01:32:14.982920233Z" level=info msg="TearDown network for sandbox \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\" successfully" Jan 17 01:32:14.983732 containerd[1507]: time="2026-01-17T01:32:14.982956687Z" level=info msg="StopPodSandbox for \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\" returns successfully" Jan 17 01:32:14.983732 containerd[1507]: time="2026-01-17T01:32:14.983687508Z" level=info msg="RemovePodSandbox for \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\"" Jan 17 01:32:14.983732 containerd[1507]: time="2026-01-17T01:32:14.983721995Z" level=info msg="Forcibly stopping sandbox \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\"" Jan 17 01:32:15.109382 containerd[1507]: 2026-01-17 01:32:15.040 [WARNING][5287] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c16de311-2d09-4fff-8444-304a8ff3b2b5", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dv3jc.gb1.brightbox.com", ContainerID:"6bc786fcad934968a5d4651cef0dd9d294af6c6ee5c21ee84ebd37a9c031b513", Pod:"csi-node-driver-crszt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliad4502ce9ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:32:15.109382 containerd[1507]: 2026-01-17 01:32:15.041 [INFO][5287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Jan 17 01:32:15.109382 containerd[1507]: 2026-01-17 01:32:15.041 [INFO][5287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" iface="eth0" netns="" Jan 17 01:32:15.109382 containerd[1507]: 2026-01-17 01:32:15.041 [INFO][5287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Jan 17 01:32:15.109382 containerd[1507]: 2026-01-17 01:32:15.041 [INFO][5287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Jan 17 01:32:15.109382 containerd[1507]: 2026-01-17 01:32:15.077 [INFO][5294] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" HandleID="k8s-pod-network.a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Workload="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:15.109382 containerd[1507]: 2026-01-17 01:32:15.077 [INFO][5294] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:32:15.109382 containerd[1507]: 2026-01-17 01:32:15.077 [INFO][5294] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:32:15.109382 containerd[1507]: 2026-01-17 01:32:15.096 [WARNING][5294] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" HandleID="k8s-pod-network.a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Workload="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:15.109382 containerd[1507]: 2026-01-17 01:32:15.096 [INFO][5294] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" HandleID="k8s-pod-network.a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Workload="srv--dv3jc.gb1.brightbox.com-k8s-csi--node--driver--crszt-eth0" Jan 17 01:32:15.109382 containerd[1507]: 2026-01-17 01:32:15.104 [INFO][5294] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:32:15.109382 containerd[1507]: 2026-01-17 01:32:15.107 [INFO][5287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b" Jan 17 01:32:15.110424 containerd[1507]: time="2026-01-17T01:32:15.109468029Z" level=info msg="TearDown network for sandbox \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\" successfully" Jan 17 01:32:15.113144 containerd[1507]: time="2026-01-17T01:32:15.113085739Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:32:15.113219 containerd[1507]: time="2026-01-17T01:32:15.113170860Z" level=info msg="RemovePodSandbox \"a7f3301118bad2dc68743b7a49f2402d1b38edf61caa6acbf7c8ab4a960f936b\" returns successfully" Jan 17 01:32:18.676245 containerd[1507]: time="2026-01-17T01:32:18.675284365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 01:32:18.987540 containerd[1507]: time="2026-01-17T01:32:18.987448096Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:18.989663 containerd[1507]: time="2026-01-17T01:32:18.989602319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 01:32:18.989943 containerd[1507]: time="2026-01-17T01:32:18.989635988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 01:32:18.990168 kubelet[2681]: E0117 01:32:18.990086 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 01:32:18.990880 kubelet[2681]: E0117 01:32:18.990209 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 01:32:18.990880 kubelet[2681]: E0117 01:32:18.990467 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfbnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-crszt_calico-system(c16de311-2d09-4fff-8444-304a8ff3b2b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:18.994024 containerd[1507]: time="2026-01-17T01:32:18.993854199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 01:32:19.301684 containerd[1507]: time="2026-01-17T01:32:19.301410196Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:19.303145 containerd[1507]: time="2026-01-17T01:32:19.303076059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 01:32:19.303243 containerd[1507]: time="2026-01-17T01:32:19.303128859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 01:32:19.303606 kubelet[2681]: E0117 01:32:19.303382 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 01:32:19.303606 kubelet[2681]: E0117 01:32:19.303455 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 01:32:19.304546 kubelet[2681]: E0117 01:32:19.303950 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfbnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-crszt_calico-system(c16de311-2d09-4fff-8444-304a8ff3b2b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:19.305752 kubelet[2681]: E0117 01:32:19.305659 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5" Jan 17 01:32:19.676573 containerd[1507]: time="2026-01-17T01:32:19.675869685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 01:32:19.982656 containerd[1507]: time="2026-01-17T01:32:19.982373019Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:19.990580 containerd[1507]: time="2026-01-17T01:32:19.990510052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 01:32:19.990753 containerd[1507]: time="2026-01-17T01:32:19.990664884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 01:32:19.991203 kubelet[2681]: E0117 01:32:19.991134 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 01:32:19.991726 kubelet[2681]: E0117 01:32:19.991230 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 01:32:19.991726 kubelet[2681]: E0117 01:32:19.991410 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:184225ead8aa4834ae5f2781753a20a4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zm6gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-9598fc574-mrl8b_calico-system(93088e93-2e87-4fb7-ba1f-ee13328ea623): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:19.995056 containerd[1507]: time="2026-01-17T01:32:19.994698078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 01:32:20.307462 containerd[1507]: time="2026-01-17T01:32:20.307237026Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:20.310099 containerd[1507]: time="2026-01-17T01:32:20.310041549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 01:32:20.310267 containerd[1507]: time="2026-01-17T01:32:20.310051191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 01:32:20.310871 kubelet[2681]: E0117 01:32:20.310411 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 01:32:20.310871 kubelet[2681]: E0117 01:32:20.310494 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 01:32:20.310871 kubelet[2681]: E0117 01:32:20.310686 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm6gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9598fc574-mrl8b_calico-system(93088e93-2e87-4fb7-ba1f-ee13328ea623): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:20.312453 kubelet[2681]: E0117 01:32:20.312257 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9598fc574-mrl8b" podUID="93088e93-2e87-4fb7-ba1f-ee13328ea623" Jan 17 01:32:20.676444 containerd[1507]: time="2026-01-17T01:32:20.676121824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:32:21.003261 containerd[1507]: time="2026-01-17T01:32:21.003009211Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:21.006343 containerd[1507]: time="2026-01-17T01:32:21.006263770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:32:21.006586 containerd[1507]: time="2026-01-17T01:32:21.006393043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:32:21.006970 kubelet[2681]: E0117 01:32:21.006896 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:32:21.007392 kubelet[2681]: E0117 01:32:21.006989 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:32:21.007492 kubelet[2681]: E0117 01:32:21.007430 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6rcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594455f78-c92j5_calico-apiserver(307224a1-2fa9-44a1-ad77-684cc2300054): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:21.008421 containerd[1507]: time="2026-01-17T01:32:21.008347071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 01:32:21.008829 kubelet[2681]: E0117 01:32:21.008770 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" podUID="307224a1-2fa9-44a1-ad77-684cc2300054" Jan 17 01:32:21.315305 containerd[1507]: time="2026-01-17T01:32:21.315057867Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:21.316894 containerd[1507]: time="2026-01-17T01:32:21.316806053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 01:32:21.317020 containerd[1507]: time="2026-01-17T01:32:21.316973953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 01:32:21.318328 kubelet[2681]: E0117 01:32:21.317418 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 01:32:21.318328 kubelet[2681]: E0117 01:32:21.317507 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 01:32:21.318328 kubelet[2681]: E0117 01:32:21.317820 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qz5x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c7d5775b-t6c4g_calico-system(50610d67-f39f-4c35-8e6d-c6596bfafc13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:21.320259 containerd[1507]: time="2026-01-17T01:32:21.318013735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 01:32:21.320315 kubelet[2681]: E0117 01:32:21.319501 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" 
podUID="50610d67-f39f-4c35-8e6d-c6596bfafc13" Jan 17 01:32:21.625366 containerd[1507]: time="2026-01-17T01:32:21.625019998Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:21.626902 containerd[1507]: time="2026-01-17T01:32:21.626811969Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 01:32:21.626902 containerd[1507]: time="2026-01-17T01:32:21.626942910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 01:32:21.627306 kubelet[2681]: E0117 01:32:21.627189 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 01:32:21.627306 kubelet[2681]: E0117 01:32:21.627275 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 01:32:21.628136 kubelet[2681]: E0117 01:32:21.627487 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6kbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vdtbx_calico-system(a924e366-2268-4f4b-91a1-779a1cb6d303): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:21.629615 kubelet[2681]: E0117 01:32:21.629574 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vdtbx" podUID="a924e366-2268-4f4b-91a1-779a1cb6d303" Jan 17 01:32:21.675503 containerd[1507]: time="2026-01-17T01:32:21.675159740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:32:21.990289 containerd[1507]: time="2026-01-17T01:32:21.990197256Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:21.991533 containerd[1507]: time="2026-01-17T01:32:21.991488495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:32:21.991739 containerd[1507]: time="2026-01-17T01:32:21.991505698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:32:21.991884 kubelet[2681]: E0117 01:32:21.991829 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:32:21.991962 kubelet[2681]: E0117 01:32:21.991903 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:32:21.993336 kubelet[2681]: E0117 01:32:21.993224 2681 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c45pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594455f78-9rjtt_calico-apiserver(166a9b11-96f4-43ec-a822-357f748e3c20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:21.994501 kubelet[2681]: E0117 01:32:21.994457 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" podUID="166a9b11-96f4-43ec-a822-357f748e3c20" Jan 17 01:32:31.674207 kubelet[2681]: E0117 01:32:31.673952 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" podUID="50610d67-f39f-4c35-8e6d-c6596bfafc13" Jan 17 01:32:31.674207 kubelet[2681]: E0117 01:32:31.674133 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" podUID="307224a1-2fa9-44a1-ad77-684cc2300054" Jan 17 01:32:32.675910 kubelet[2681]: E0117 01:32:32.675794 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5" Jan 17 01:32:33.675239 kubelet[2681]: E0117 01:32:33.674837 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9598fc574-mrl8b" podUID="93088e93-2e87-4fb7-ba1f-ee13328ea623" Jan 17 01:32:34.674672 kubelet[2681]: E0117 01:32:34.674490 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vdtbx" podUID="a924e366-2268-4f4b-91a1-779a1cb6d303" Jan 17 01:32:36.674801 kubelet[2681]: E0117 01:32:36.674420 2681 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" podUID="166a9b11-96f4-43ec-a822-357f748e3c20" Jan 17 01:32:39.471314 systemd[1]: Started sshd@9-10.243.73.142:22-20.161.92.111:47860.service - OpenSSH per-connection server daemon (20.161.92.111:47860). Jan 17 01:32:40.104846 sshd[5350]: Accepted publickey for core from 20.161.92.111 port 47860 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:32:40.109462 sshd[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:32:40.121318 systemd-logind[1489]: New session 12 of user core. Jan 17 01:32:40.131413 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 01:32:41.106539 sshd[5350]: pam_unix(sshd:session): session closed for user core Jan 17 01:32:41.113343 systemd-logind[1489]: Session 12 logged out. Waiting for processes to exit. Jan 17 01:32:41.114799 systemd[1]: sshd@9-10.243.73.142:22-20.161.92.111:47860.service: Deactivated successfully. Jan 17 01:32:41.118847 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 01:32:41.120345 systemd-logind[1489]: Removed session 12. Jan 17 01:32:43.675782 containerd[1507]: time="2026-01-17T01:32:43.675618781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 01:32:44.046164 containerd[1507]: time="2026-01-17T01:32:44.046045470Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:44.047552 containerd[1507]: time="2026-01-17T01:32:44.047498714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 01:32:44.047701 containerd[1507]: time="2026-01-17T01:32:44.047623835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 01:32:44.047963 kubelet[2681]: E0117 01:32:44.047881 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 01:32:44.048664 kubelet[2681]: E0117 01:32:44.047976 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 01:32:44.048664 kubelet[2681]: E0117 01:32:44.048251 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qz5x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c7d5775b-t6c4g_calico-system(50610d67-f39f-4c35-8e6d-c6596bfafc13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:44.049614 kubelet[2681]: E0117 01:32:44.049508 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" podUID="50610d67-f39f-4c35-8e6d-c6596bfafc13" Jan 17 01:32:44.676938 containerd[1507]: time="2026-01-17T01:32:44.676536129Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 01:32:44.989730 containerd[1507]: time="2026-01-17T01:32:44.989477123Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:44.991248 containerd[1507]: time="2026-01-17T01:32:44.991182750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 01:32:44.991605 containerd[1507]: time="2026-01-17T01:32:44.991307396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 01:32:44.991809 kubelet[2681]: E0117 01:32:44.991699 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 01:32:44.991972 kubelet[2681]: E0117 01:32:44.991828 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 01:32:44.992486 kubelet[2681]: E0117 01:32:44.992417 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfbnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-crszt_calico-system(c16de311-2d09-4fff-8444-304a8ff3b2b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:44.993618 containerd[1507]: time="2026-01-17T01:32:44.993586361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:32:45.297298 containerd[1507]: time="2026-01-17T01:32:45.296838910Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:45.298667 containerd[1507]: time="2026-01-17T01:32:45.298467152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:32:45.298667 containerd[1507]: time="2026-01-17T01:32:45.298596112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:32:45.301222 kubelet[2681]: E0117 01:32:45.299055 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:32:45.301222 kubelet[2681]: E0117 01:32:45.299173 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:32:45.301222 kubelet[2681]: E0117 01:32:45.299654 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6rcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594455f78-c92j5_calico-apiserver(307224a1-2fa9-44a1-ad77-684cc2300054): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:45.301917 containerd[1507]: time="2026-01-17T01:32:45.299735359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 01:32:45.302420 kubelet[2681]: E0117 01:32:45.302246 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" podUID="307224a1-2fa9-44a1-ad77-684cc2300054" Jan 17 01:32:45.607354 containerd[1507]: time="2026-01-17T01:32:45.607097526Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:45.608637 containerd[1507]: time="2026-01-17T01:32:45.608577675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 01:32:45.609074 containerd[1507]: time="2026-01-17T01:32:45.608702256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 01:32:45.609695 kubelet[2681]: E0117 01:32:45.609016 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 01:32:45.609695 kubelet[2681]: E0117 01:32:45.609084 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 01:32:45.609695 kubelet[2681]: E0117 01:32:45.609282 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfbnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-crszt_calico-system(c16de311-2d09-4fff-8444-304a8ff3b2b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:45.611133 kubelet[2681]: E0117 01:32:45.611055 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5" Jan 17 01:32:46.219487 systemd[1]: Started sshd@10-10.243.73.142:22-20.161.92.111:42326.service - OpenSSH per-connection server daemon (20.161.92.111:42326). 
Jan 17 01:32:46.676850 containerd[1507]: time="2026-01-17T01:32:46.675929611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 01:32:46.800874 sshd[5364]: Accepted publickey for core from 20.161.92.111 port 42326 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:32:46.803568 sshd[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:32:46.812505 systemd-logind[1489]: New session 13 of user core. Jan 17 01:32:46.821395 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 01:32:46.999833 containerd[1507]: time="2026-01-17T01:32:46.999413996Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:47.001158 containerd[1507]: time="2026-01-17T01:32:47.001000109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 01:32:47.001158 containerd[1507]: time="2026-01-17T01:32:47.001059241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 01:32:47.001567 kubelet[2681]: E0117 01:32:47.001489 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 01:32:47.002576 kubelet[2681]: E0117 01:32:47.001591 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 01:32:47.002576 kubelet[2681]: E0117 01:32:47.001840 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:184225ead8aa4834ae5f2781753a20a4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zm6gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9598fc574-mrl8b_calico-system(93088e93-2e87-4fb7-ba1f-ee13328ea623): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:47.005173 containerd[1507]: time="2026-01-17T01:32:47.004888085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 01:32:47.315479 containerd[1507]: time="2026-01-17T01:32:47.315272625Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:47.317837 containerd[1507]: time="2026-01-17T01:32:47.317334993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 01:32:47.317837 containerd[1507]: time="2026-01-17T01:32:47.317571540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 01:32:47.317986 kubelet[2681]: E0117 01:32:47.317864 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 01:32:47.317986 kubelet[2681]: E0117 01:32:47.317952 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 01:32:47.320130 kubelet[2681]: E0117 01:32:47.318171 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm6gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9598fc574-mrl8b_calico-system(93088e93-2e87-4fb7-ba1f-ee13328ea623): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:47.320589 kubelet[2681]: E0117 01:32:47.320155 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9598fc574-mrl8b" podUID="93088e93-2e87-4fb7-ba1f-ee13328ea623" Jan 17 01:32:47.320256 sshd[5364]: pam_unix(sshd:session): session closed for user core Jan 17 01:32:47.328739 systemd[1]: sshd@10-10.243.73.142:22-20.161.92.111:42326.service: Deactivated successfully. 
Jan 17 01:32:47.332965 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 01:32:47.334771 systemd-logind[1489]: Session 13 logged out. Waiting for processes to exit. Jan 17 01:32:47.336566 systemd-logind[1489]: Removed session 13. Jan 17 01:32:48.674895 containerd[1507]: time="2026-01-17T01:32:48.674544909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 01:32:48.984152 containerd[1507]: time="2026-01-17T01:32:48.983838290Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:48.985426 containerd[1507]: time="2026-01-17T01:32:48.985300417Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 01:32:48.985426 containerd[1507]: time="2026-01-17T01:32:48.985374872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 01:32:48.985703 kubelet[2681]: E0117 01:32:48.985638 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 01:32:48.986349 kubelet[2681]: E0117 01:32:48.985722 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 01:32:48.986349 kubelet[2681]: E0117 01:32:48.985906 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6kbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vdtbx_calico-system(a924e366-2268-4f4b-91a1-779a1cb6d303): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:48.987918 kubelet[2681]: E0117 01:32:48.987588 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vdtbx" podUID="a924e366-2268-4f4b-91a1-779a1cb6d303" Jan 17 01:32:49.673488 containerd[1507]: 
time="2026-01-17T01:32:49.673271035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:32:49.981414 containerd[1507]: time="2026-01-17T01:32:49.981301707Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:32:49.982865 containerd[1507]: time="2026-01-17T01:32:49.982620943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:32:49.982865 containerd[1507]: time="2026-01-17T01:32:49.982748342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:32:49.983363 kubelet[2681]: E0117 01:32:49.983309 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:32:49.983457 kubelet[2681]: E0117 01:32:49.983376 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:32:49.984879 kubelet[2681]: E0117 01:32:49.983563 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c45pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594455f78-9rjtt_calico-apiserver(166a9b11-96f4-43ec-a822-357f748e3c20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:32:49.985290 kubelet[2681]: E0117 01:32:49.985015 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" podUID="166a9b11-96f4-43ec-a822-357f748e3c20" Jan 17 01:32:52.424573 systemd[1]: Started sshd@11-10.243.73.142:22-20.161.92.111:36062.service - OpenSSH per-connection server daemon (20.161.92.111:36062). Jan 17 01:32:52.989833 sshd[5387]: Accepted publickey for core from 20.161.92.111 port 36062 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:32:52.992792 sshd[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:32:52.999355 systemd-logind[1489]: New session 14 of user core. Jan 17 01:32:53.004302 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 01:32:53.485172 sshd[5387]: pam_unix(sshd:session): session closed for user core Jan 17 01:32:53.490912 systemd-logind[1489]: Session 14 logged out. Waiting for processes to exit. Jan 17 01:32:53.492444 systemd[1]: sshd@11-10.243.73.142:22-20.161.92.111:36062.service: Deactivated successfully. Jan 17 01:32:53.495829 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 01:32:53.498179 systemd-logind[1489]: Removed session 14. 
Jan 17 01:32:57.673464 kubelet[2681]: E0117 01:32:57.673388 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" podUID="50610d67-f39f-4c35-8e6d-c6596bfafc13" Jan 17 01:32:58.590515 systemd[1]: Started sshd@12-10.243.73.142:22-20.161.92.111:36070.service - OpenSSH per-connection server daemon (20.161.92.111:36070). Jan 17 01:32:58.676902 kubelet[2681]: E0117 01:32:58.676598 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9598fc574-mrl8b" podUID="93088e93-2e87-4fb7-ba1f-ee13328ea623" Jan 17 01:32:59.155041 sshd[5401]: Accepted publickey for core from 20.161.92.111 port 36070 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:32:59.157243 sshd[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:32:59.165004 systemd-logind[1489]: New session 15 of user core. Jan 17 01:32:59.174447 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 01:32:59.651642 sshd[5401]: pam_unix(sshd:session): session closed for user core Jan 17 01:32:59.656068 systemd-logind[1489]: Session 15 logged out. Waiting for processes to exit. Jan 17 01:32:59.656867 systemd[1]: sshd@12-10.243.73.142:22-20.161.92.111:36070.service: Deactivated successfully. Jan 17 01:32:59.659995 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 01:32:59.662860 systemd-logind[1489]: Removed session 15. Jan 17 01:32:59.754499 systemd[1]: Started sshd@13-10.243.73.142:22-20.161.92.111:36076.service - OpenSSH per-connection server daemon (20.161.92.111:36076). Jan 17 01:33:00.325605 sshd[5415]: Accepted publickey for core from 20.161.92.111 port 36076 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:33:00.330440 sshd[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:33:00.337485 systemd-logind[1489]: New session 16 of user core. Jan 17 01:33:00.345352 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 17 01:33:00.679656 kubelet[2681]: E0117 01:33:00.679613 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" podUID="307224a1-2fa9-44a1-ad77-684cc2300054" Jan 17 01:33:00.681430 kubelet[2681]: E0117 01:33:00.680459 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" podUID="166a9b11-96f4-43ec-a822-357f748e3c20" Jan 17 01:33:00.683863 kubelet[2681]: E0117 01:33:00.683545 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5" Jan 17 01:33:00.955617 sshd[5415]: pam_unix(sshd:session): session closed for user core Jan 17 01:33:00.961587 systemd[1]: sshd@13-10.243.73.142:22-20.161.92.111:36076.service: Deactivated successfully. Jan 17 01:33:00.964814 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 01:33:00.966001 systemd-logind[1489]: Session 16 logged out. Waiting for processes to exit. Jan 17 01:33:00.968429 systemd-logind[1489]: Removed session 16. Jan 17 01:33:01.061503 systemd[1]: Started sshd@14-10.243.73.142:22-20.161.92.111:36086.service - OpenSSH per-connection server daemon (20.161.92.111:36086). Jan 17 01:33:01.658002 sshd[5425]: Accepted publickey for core from 20.161.92.111 port 36086 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:33:01.660492 sshd[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:33:01.667063 systemd-logind[1489]: New session 17 of user core. Jan 17 01:33:01.679360 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 17 01:33:02.144074 sshd[5425]: pam_unix(sshd:session): session closed for user core Jan 17 01:33:02.148245 systemd[1]: sshd@14-10.243.73.142:22-20.161.92.111:36086.service: Deactivated successfully. Jan 17 01:33:02.151298 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 01:33:02.153702 systemd-logind[1489]: Session 17 logged out. Waiting for processes to exit. Jan 17 01:33:02.155467 systemd-logind[1489]: Removed session 17. Jan 17 01:33:03.674480 kubelet[2681]: E0117 01:33:03.674393 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vdtbx" podUID="a924e366-2268-4f4b-91a1-779a1cb6d303" Jan 17 01:33:05.176662 systemd[1]: run-containerd-runc-k8s.io-a039439259c9b76ab22fbccf8671bf924eb5c34a31000b8a951a5473fd269cb0-runc.E4Hvu8.mount: Deactivated successfully. Jan 17 01:33:07.254036 systemd[1]: Started sshd@15-10.243.73.142:22-20.161.92.111:57474.service - OpenSSH per-connection server daemon (20.161.92.111:57474). Jan 17 01:33:07.817728 sshd[5465]: Accepted publickey for core from 20.161.92.111 port 57474 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:33:07.820382 sshd[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:33:07.835720 systemd-logind[1489]: New session 18 of user core. Jan 17 01:33:07.841396 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 01:33:08.312439 sshd[5465]: pam_unix(sshd:session): session closed for user core Jan 17 01:33:08.318292 systemd-logind[1489]: Session 18 logged out. Waiting for processes to exit. Jan 17 01:33:08.319337 systemd[1]: sshd@15-10.243.73.142:22-20.161.92.111:57474.service: Deactivated successfully. Jan 17 01:33:08.322966 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 01:33:08.324684 systemd-logind[1489]: Removed session 18. 
Jan 17 01:33:09.674486 kubelet[2681]: E0117 01:33:09.674349 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" podUID="50610d67-f39f-4c35-8e6d-c6596bfafc13" Jan 17 01:33:11.676413 kubelet[2681]: E0117 01:33:11.676050 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9598fc574-mrl8b" podUID="93088e93-2e87-4fb7-ba1f-ee13328ea623" Jan 17 01:33:11.676413 kubelet[2681]: E0117 01:33:11.676251 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5" Jan 17 01:33:12.674443 kubelet[2681]: E0117 01:33:12.673893 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" podUID="307224a1-2fa9-44a1-ad77-684cc2300054" Jan 17 01:33:13.424527 systemd[1]: Started sshd@16-10.243.73.142:22-20.161.92.111:34668.service - OpenSSH per-connection 
server daemon (20.161.92.111:34668). Jan 17 01:33:13.990491 sshd[5479]: Accepted publickey for core from 20.161.92.111 port 34668 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:33:13.992863 sshd[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:33:14.000046 systemd-logind[1489]: New session 19 of user core. Jan 17 01:33:14.006356 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 01:33:14.485541 sshd[5479]: pam_unix(sshd:session): session closed for user core Jan 17 01:33:14.491734 systemd[1]: sshd@16-10.243.73.142:22-20.161.92.111:34668.service: Deactivated successfully. Jan 17 01:33:14.494359 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 01:33:14.495368 systemd-logind[1489]: Session 19 logged out. Waiting for processes to exit. Jan 17 01:33:14.497259 systemd-logind[1489]: Removed session 19. Jan 17 01:33:15.675342 kubelet[2681]: E0117 01:33:15.675163 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" podUID="166a9b11-96f4-43ec-a822-357f748e3c20" Jan 17 01:33:18.679238 kubelet[2681]: E0117 01:33:18.679151 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vdtbx" podUID="a924e366-2268-4f4b-91a1-779a1cb6d303" Jan 17 01:33:19.594760 systemd[1]: Started sshd@17-10.243.73.142:22-20.161.92.111:34680.service - OpenSSH per-connection server daemon (20.161.92.111:34680). Jan 17 01:33:20.164735 sshd[5494]: Accepted publickey for core from 20.161.92.111 port 34680 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:33:20.167193 sshd[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:33:20.177919 systemd-logind[1489]: New session 20 of user core. Jan 17 01:33:20.183460 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 01:33:20.659487 sshd[5494]: pam_unix(sshd:session): session closed for user core Jan 17 01:33:20.664970 systemd[1]: sshd@17-10.243.73.142:22-20.161.92.111:34680.service: Deactivated successfully. Jan 17 01:33:20.668599 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 01:33:20.669744 systemd-logind[1489]: Session 20 logged out. Waiting for processes to exit. Jan 17 01:33:20.673566 systemd-logind[1489]: Removed session 20. 
Jan 17 01:33:21.673860 kubelet[2681]: E0117 01:33:21.673733 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" podUID="50610d67-f39f-4c35-8e6d-c6596bfafc13" Jan 17 01:33:22.676474 kubelet[2681]: E0117 01:33:22.676351 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5" Jan 17 01:33:25.763547 systemd[1]: Started sshd@18-10.243.73.142:22-20.161.92.111:49698.service - OpenSSH per-connection server daemon (20.161.92.111:49698). Jan 17 01:33:26.341057 sshd[5507]: Accepted publickey for core from 20.161.92.111 port 49698 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:33:26.345210 sshd[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:33:26.356867 systemd-logind[1489]: New session 21 of user core. Jan 17 01:33:26.366494 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 17 01:33:26.682535 containerd[1507]: time="2026-01-17T01:33:26.681954430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:33:26.685037 kubelet[2681]: E0117 01:33:26.684887 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9598fc574-mrl8b" podUID="93088e93-2e87-4fb7-ba1f-ee13328ea623" Jan 17 01:33:26.878865 sshd[5507]: pam_unix(sshd:session): session closed for user core Jan 17 01:33:26.887401 systemd[1]: sshd@18-10.243.73.142:22-20.161.92.111:49698.service: Deactivated successfully. Jan 17 01:33:26.891267 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 01:33:26.892743 systemd-logind[1489]: Session 21 logged out. Waiting for processes to exit. Jan 17 01:33:26.895308 systemd-logind[1489]: Removed session 21. Jan 17 01:33:26.990655 systemd[1]: Started sshd@19-10.243.73.142:22-20.161.92.111:49708.service - OpenSSH per-connection server daemon (20.161.92.111:49708). 
Jan 17 01:33:27.010169 containerd[1507]: time="2026-01-17T01:33:27.009663957Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:33:27.011969 containerd[1507]: time="2026-01-17T01:33:27.011786208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:33:27.012059 containerd[1507]: time="2026-01-17T01:33:27.011795190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:33:27.014799 kubelet[2681]: E0117 01:33:27.012568 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:33:27.014799 kubelet[2681]: E0117 01:33:27.012668 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:33:27.014799 kubelet[2681]: E0117 01:33:27.012979 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6rcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594455f78-c92j5_calico-apiserver(307224a1-2fa9-44a1-ad77-684cc2300054): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:33:27.015458 kubelet[2681]: E0117 01:33:27.015320 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" podUID="307224a1-2fa9-44a1-ad77-684cc2300054" Jan 17 01:33:27.554842 sshd[5520]: Accepted publickey for core from 20.161.92.111 port 49708 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:33:27.557369 sshd[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:33:27.565816 systemd-logind[1489]: New session 22 of user core. Jan 17 01:33:27.571398 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 01:33:27.673639 kubelet[2681]: E0117 01:33:27.673576 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" podUID="166a9b11-96f4-43ec-a822-357f748e3c20" Jan 17 01:33:28.312060 sshd[5520]: pam_unix(sshd:session): session closed for user core Jan 17 01:33:28.321153 systemd[1]: sshd@19-10.243.73.142:22-20.161.92.111:49708.service: Deactivated successfully. Jan 17 01:33:28.325348 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 01:33:28.326852 systemd-logind[1489]: Session 22 logged out. Waiting for processes to exit. Jan 17 01:33:28.328496 systemd-logind[1489]: Removed session 22. Jan 17 01:33:28.412525 systemd[1]: Started sshd@20-10.243.73.142:22-20.161.92.111:49718.service - OpenSSH per-connection server daemon (20.161.92.111:49718). 
Jan 17 01:33:29.005223 sshd[5531]: Accepted publickey for core from 20.161.92.111 port 49718 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:33:29.007425 sshd[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:33:29.014532 systemd-logind[1489]: New session 23 of user core.
Jan 17 01:33:29.018372 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 01:33:30.226098 sshd[5531]: pam_unix(sshd:session): session closed for user core
Jan 17 01:33:30.237332 systemd[1]: sshd@20-10.243.73.142:22-20.161.92.111:49718.service: Deactivated successfully.
Jan 17 01:33:30.239775 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 01:33:30.240940 systemd-logind[1489]: Session 23 logged out. Waiting for processes to exit.
Jan 17 01:33:30.243358 systemd-logind[1489]: Removed session 23.
Jan 17 01:33:30.332532 systemd[1]: Started sshd@21-10.243.73.142:22-20.161.92.111:49722.service - OpenSSH per-connection server daemon (20.161.92.111:49722).
Jan 17 01:33:30.675248 containerd[1507]: time="2026-01-17T01:33:30.674355377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 17 01:33:30.912626 sshd[5557]: Accepted publickey for core from 20.161.92.111 port 49722 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:33:30.915407 sshd[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:33:30.923375 systemd-logind[1489]: New session 24 of user core.
Jan 17 01:33:30.929357 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 01:33:30.991309 containerd[1507]: time="2026-01-17T01:33:30.991229478Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 01:33:30.992887 containerd[1507]: time="2026-01-17T01:33:30.992829572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 17 01:33:30.993272 containerd[1507]: time="2026-01-17T01:33:30.992976494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 17 01:33:30.993400 kubelet[2681]: E0117 01:33:30.993307 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 17 01:33:30.994053 kubelet[2681]: E0117 01:33:30.993401 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 17 01:33:30.994053 kubelet[2681]: E0117 01:33:30.993658 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6kbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vdtbx_calico-system(a924e366-2268-4f4b-91a1-779a1cb6d303): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 17 01:33:30.995165 kubelet[2681]: E0117 01:33:30.994948 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vdtbx" podUID="a924e366-2268-4f4b-91a1-779a1cb6d303"
Jan 17 01:33:31.711640 sshd[5557]: pam_unix(sshd:session): session closed for user core
Jan 17 01:33:31.716280 systemd[1]: sshd@21-10.243.73.142:22-20.161.92.111:49722.service: Deactivated successfully.
Jan 17 01:33:31.720218 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 01:33:31.722603 systemd-logind[1489]: Session 24 logged out. Waiting for processes to exit.
Jan 17 01:33:31.724203 systemd-logind[1489]: Removed session 24.
Jan 17 01:33:31.817551 systemd[1]: Started sshd@22-10.243.73.142:22-20.161.92.111:49730.service - OpenSSH per-connection server daemon (20.161.92.111:49730).
Jan 17 01:33:32.399615 sshd[5567]: Accepted publickey for core from 20.161.92.111 port 49730 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:33:32.402294 sshd[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:33:32.410215 systemd-logind[1489]: New session 25 of user core.
Jan 17 01:33:32.417528 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 01:33:32.959096 sshd[5567]: pam_unix(sshd:session): session closed for user core
Jan 17 01:33:32.966670 systemd[1]: sshd@22-10.243.73.142:22-20.161.92.111:49730.service: Deactivated successfully.
Jan 17 01:33:32.970247 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 01:33:32.971635 systemd-logind[1489]: Session 25 logged out. Waiting for processes to exit.
Jan 17 01:33:32.975880 systemd-logind[1489]: Removed session 25.
Jan 17 01:33:34.674972 containerd[1507]: time="2026-01-17T01:33:34.674845849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 17 01:33:34.987159 containerd[1507]: time="2026-01-17T01:33:34.987048198Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 01:33:34.988808 containerd[1507]: time="2026-01-17T01:33:34.988366155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 17 01:33:34.988808 containerd[1507]: time="2026-01-17T01:33:34.988441991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 17 01:33:34.989005 kubelet[2681]: E0117 01:33:34.988773 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 01:33:34.989005 kubelet[2681]: E0117 01:33:34.988909 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 01:33:34.989970 kubelet[2681]: E0117 01:33:34.989145 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfbnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-crszt_calico-system(c16de311-2d09-4fff-8444-304a8ff3b2b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 17 01:33:34.993356 containerd[1507]: time="2026-01-17T01:33:34.992562868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 17 01:33:35.134264 systemd[1]: run-containerd-runc-k8s.io-a039439259c9b76ab22fbccf8671bf924eb5c34a31000b8a951a5473fd269cb0-runc.XxZB21.mount: Deactivated successfully.
Jan 17 01:33:35.304268 containerd[1507]: time="2026-01-17T01:33:35.304040033Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 01:33:35.306200 containerd[1507]: time="2026-01-17T01:33:35.306125491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 17 01:33:35.306200 containerd[1507]: time="2026-01-17T01:33:35.306161854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 17 01:33:35.306814 kubelet[2681]: E0117 01:33:35.306723 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 01:33:35.306930 kubelet[2681]: E0117 01:33:35.306821 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 01:33:35.307086 kubelet[2681]: E0117 01:33:35.307009 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfbnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-crszt_calico-system(c16de311-2d09-4fff-8444-304a8ff3b2b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 17 01:33:35.308556 kubelet[2681]: E0117 01:33:35.308464 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5"
Jan 17 01:33:35.673987 containerd[1507]: time="2026-01-17T01:33:35.673813728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 17 01:33:35.981473 containerd[1507]: time="2026-01-17T01:33:35.981267849Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 01:33:35.982798 containerd[1507]: time="2026-01-17T01:33:35.982718545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 17 01:33:35.982979 containerd[1507]: time="2026-01-17T01:33:35.982824873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 17 01:33:35.983125 kubelet[2681]: E0117 01:33:35.983041 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 17 01:33:35.984810 kubelet[2681]: E0117 01:33:35.983146 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 17 01:33:35.984810 kubelet[2681]: E0117 01:33:35.983352 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qz5x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c7d5775b-t6c4g_calico-system(50610d67-f39f-4c35-8e6d-c6596bfafc13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 17 01:33:35.984810 kubelet[2681]: E0117 01:33:35.984545 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" podUID="50610d67-f39f-4c35-8e6d-c6596bfafc13"
Jan 17 01:33:38.066581 systemd[1]: Started sshd@23-10.243.73.142:22-20.161.92.111:37116.service - OpenSSH per-connection server daemon (20.161.92.111:37116).
Jan 17 01:33:38.632926 sshd[5601]: Accepted publickey for core from 20.161.92.111 port 37116 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:33:38.635747 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:33:38.644333 systemd-logind[1489]: New session 26 of user core.
Jan 17 01:33:38.651301 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 01:33:39.124905 sshd[5601]: pam_unix(sshd:session): session closed for user core
Jan 17 01:33:39.130773 systemd-logind[1489]: Session 26 logged out. Waiting for processes to exit.
Jan 17 01:33:39.131490 systemd[1]: sshd@23-10.243.73.142:22-20.161.92.111:37116.service: Deactivated successfully.
Jan 17 01:33:39.137652 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 01:33:39.139493 systemd-logind[1489]: Removed session 26.
Jan 17 01:33:39.674608 kubelet[2681]: E0117 01:33:39.674485 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" podUID="307224a1-2fa9-44a1-ad77-684cc2300054"
Jan 17 01:33:39.676655 containerd[1507]: time="2026-01-17T01:33:39.676330765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 17 01:33:39.990812 containerd[1507]: time="2026-01-17T01:33:39.990318736Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 01:33:39.997981 containerd[1507]: time="2026-01-17T01:33:39.997765750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 17 01:33:39.997981 containerd[1507]: time="2026-01-17T01:33:39.997898206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 17 01:33:39.998349 kubelet[2681]: E0117 01:33:39.998263 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 17 01:33:39.998484 kubelet[2681]: E0117 01:33:39.998364 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 17 01:33:39.999199 kubelet[2681]: E0117 01:33:39.998534 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:184225ead8aa4834ae5f2781753a20a4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zm6gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9598fc574-mrl8b_calico-system(93088e93-2e87-4fb7-ba1f-ee13328ea623): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 17 01:33:40.001522 containerd[1507]: time="2026-01-17T01:33:40.001493825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 17 01:33:40.310566 containerd[1507]: time="2026-01-17T01:33:40.310181721Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 01:33:40.311967 containerd[1507]: time="2026-01-17T01:33:40.311671715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 17 01:33:40.311967 containerd[1507]: time="2026-01-17T01:33:40.311716586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 17 01:33:40.312167 kubelet[2681]: E0117 01:33:40.312046 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 17 01:33:40.312255 kubelet[2681]: E0117 01:33:40.312157 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 17 01:33:40.312502 kubelet[2681]: E0117 01:33:40.312373 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm6gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9598fc574-mrl8b_calico-system(93088e93-2e87-4fb7-ba1f-ee13328ea623): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 17 01:33:40.314238 kubelet[2681]: E0117 01:33:40.314185 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9598fc574-mrl8b" podUID="93088e93-2e87-4fb7-ba1f-ee13328ea623"
Jan 17 01:33:41.674952 kubelet[2681]: E0117 01:33:41.674706 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vdtbx" podUID="a924e366-2268-4f4b-91a1-779a1cb6d303"
Jan 17 01:33:41.678834 containerd[1507]: time="2026-01-17T01:33:41.678360352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 17 01:33:41.986459 containerd[1507]: time="2026-01-17T01:33:41.986133987Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 01:33:41.988125 containerd[1507]: time="2026-01-17T01:33:41.987399095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 17 01:33:41.988540 containerd[1507]: time="2026-01-17T01:33:41.988287946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 17 01:33:41.989891 kubelet[2681]: E0117 01:33:41.988921 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 01:33:41.989891 kubelet[2681]: E0117 01:33:41.989040 2681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 01:33:41.989891 kubelet[2681]: E0117 01:33:41.989250 2681 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c45pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594455f78-9rjtt_calico-apiserver(166a9b11-96f4-43ec-a822-357f748e3c20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 17 01:33:41.991349 kubelet[2681]: E0117 01:33:41.991300 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" podUID="166a9b11-96f4-43ec-a822-357f748e3c20"
Jan 17 01:33:44.237295 systemd[1]: Started sshd@24-10.243.73.142:22-20.161.92.111:37300.service - OpenSSH per-connection server daemon (20.161.92.111:37300).
Jan 17 01:33:44.846561 sshd[5636]: Accepted publickey for core from 20.161.92.111 port 37300 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:33:44.850376 sshd[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:33:44.861293 systemd-logind[1489]: New session 27 of user core.
Jan 17 01:33:44.866362 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 17 01:33:45.405027 sshd[5636]: pam_unix(sshd:session): session closed for user core
Jan 17 01:33:45.411359 systemd-logind[1489]: Session 27 logged out. Waiting for processes to exit.
Jan 17 01:33:45.412730 systemd[1]: sshd@24-10.243.73.142:22-20.161.92.111:37300.service: Deactivated successfully.
Jan 17 01:33:45.417532 systemd[1]: session-27.scope: Deactivated successfully.
Jan 17 01:33:45.420464 systemd-logind[1489]: Removed session 27.
Jan 17 01:33:48.673876 kubelet[2681]: E0117 01:33:48.673433 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" podUID="50610d67-f39f-4c35-8e6d-c6596bfafc13"
Jan 17 01:33:49.676443 kubelet[2681]: E0117 01:33:49.676212 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-crszt" podUID="c16de311-2d09-4fff-8444-304a8ff3b2b5"
Jan 17 01:33:50.511525 systemd[1]: Started sshd@25-10.243.73.142:22-20.161.92.111:37310.service - OpenSSH per-connection server daemon (20.161.92.111:37310).
Jan 17 01:33:50.679319 kubelet[2681]: E0117 01:33:50.678696 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-c92j5" podUID="307224a1-2fa9-44a1-ad77-684cc2300054"
Jan 17 01:33:51.104904 sshd[5651]: Accepted publickey for core from 20.161.92.111 port 37310 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:33:51.109791 sshd[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:33:51.121346 systemd-logind[1489]: New session 28 of user core.
Jan 17 01:33:51.130349 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 17 01:33:51.741920 sshd[5651]: pam_unix(sshd:session): session closed for user core
Jan 17 01:33:51.748959 systemd[1]: sshd@25-10.243.73.142:22-20.161.92.111:37310.service: Deactivated successfully.
Jan 17 01:33:51.754263 systemd[1]: session-28.scope: Deactivated successfully.
Jan 17 01:33:51.758035 systemd-logind[1489]: Session 28 logged out. Waiting for processes to exit.
Jan 17 01:33:51.760624 systemd-logind[1489]: Removed session 28.
Jan 17 01:33:53.674442 kubelet[2681]: E0117 01:33:53.674306 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9598fc574-mrl8b" podUID="93088e93-2e87-4fb7-ba1f-ee13328ea623"
Jan 17 01:33:54.675353 kubelet[2681]: E0117 01:33:54.675292 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594455f78-9rjtt" podUID="166a9b11-96f4-43ec-a822-357f748e3c20"
Jan 17 01:33:55.675230 kubelet[2681]: E0117 01:33:55.675160 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vdtbx" podUID="a924e366-2268-4f4b-91a1-779a1cb6d303"
Jan 17 01:33:56.848417 systemd[1]: Started sshd@26-10.243.73.142:22-20.161.92.111:37254.service - OpenSSH per-connection server daemon (20.161.92.111:37254).
Jan 17 01:33:57.451882 sshd[5664]: Accepted publickey for core from 20.161.92.111 port 37254 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:33:57.454818 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:33:57.465053 systemd-logind[1489]: New session 29 of user core.
Jan 17 01:33:57.471333 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 17 01:33:58.145400 sshd[5664]: pam_unix(sshd:session): session closed for user core
Jan 17 01:33:58.149902 systemd-logind[1489]: Session 29 logged out. Waiting for processes to exit.
Jan 17 01:33:58.151206 systemd[1]: sshd@26-10.243.73.142:22-20.161.92.111:37254.service: Deactivated successfully.
Jan 17 01:33:58.156258 systemd[1]: session-29.scope: Deactivated successfully.
Jan 17 01:33:58.160145 systemd-logind[1489]: Removed session 29.
Jan 17 01:34:00.675754 kubelet[2681]: E0117 01:34:00.675201 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c7d5775b-t6c4g" podUID="50610d67-f39f-4c35-8e6d-c6596bfafc13"