Jan 24 03:07:29.036622 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026 Jan 24 03:07:29.036661 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 03:07:29.036675 kernel: BIOS-provided physical RAM map: Jan 24 03:07:29.036692 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 24 03:07:29.036702 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 24 03:07:29.036713 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 24 03:07:29.036725 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 24 03:07:29.036736 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 24 03:07:29.036747 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 24 03:07:29.036757 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 24 03:07:29.036768 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 24 03:07:29.036778 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 24 03:07:29.036794 kernel: NX (Execute Disable) protection: active Jan 24 03:07:29.036806 kernel: APIC: Static calls initialized Jan 24 03:07:29.036818 kernel: SMBIOS 2.8 present. Jan 24 03:07:29.036831 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jan 24 03:07:29.036842 kernel: Hypervisor detected: KVM Jan 24 03:07:29.036859 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 03:07:29.036871 kernel: kvm-clock: using sched offset of 4529354583 cycles Jan 24 03:07:29.036884 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 03:07:29.036896 kernel: tsc: Detected 2499.998 MHz processor Jan 24 03:07:29.036908 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 03:07:29.036920 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 03:07:29.036932 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 24 03:07:29.036944 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 24 03:07:29.036956 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 03:07:29.036973 kernel: Using GB pages for direct mapping Jan 24 03:07:29.036985 kernel: ACPI: Early table checksum verification disabled Jan 24 03:07:29.036997 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jan 24 03:07:29.037009 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 03:07:29.037021 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 03:07:29.037033 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 03:07:29.037044 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 24 03:07:29.037056 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 03:07:29.037068 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Jan 24 03:07:29.037085 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 03:07:29.037097 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 03:07:29.037109 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 24 03:07:29.037121 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 24 03:07:29.037133 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 24 03:07:29.037152 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 24 03:07:29.037164 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 24 03:07:29.037181 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 24 03:07:29.037194 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 24 03:07:29.037206 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 24 03:07:29.037219 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 24 03:07:29.037231 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 24 03:07:29.037244 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Jan 24 03:07:29.037256 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 24 03:07:29.037273 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Jan 24 03:07:29.037314 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 24 03:07:29.037327 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Jan 24 03:07:29.037339 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 24 03:07:29.037352 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Jan 24 03:07:29.037364 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 24 03:07:29.037376 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Jan 24 03:07:29.037388 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 24 03:07:29.037400 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Jan 24 03:07:29.037413 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 24 03:07:29.037432 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Jan 24 03:07:29.037444 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 24 03:07:29.037456 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 24 03:07:29.037469 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 24 03:07:29.037482 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Jan 24 03:07:29.037494 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Jan 24 03:07:29.037507 kernel: Zone ranges: Jan 24 03:07:29.037519 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 03:07:29.037532 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 24 03:07:29.037550 kernel: Normal empty Jan 24 03:07:29.037562 kernel: Movable zone start for each node Jan 24 03:07:29.037575 kernel: Early memory node ranges Jan 24 03:07:29.037587 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 24 03:07:29.037599 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 24 03:07:29.037612 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 24 03:07:29.037624 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 03:07:29.037636 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 24 03:07:29.037649 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 24 03:07:29.037661 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 03:07:29.037679 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 03:07:29.037692 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Jan 24 03:07:29.037704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 03:07:29.037717 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 03:07:29.037729 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 03:07:29.037742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 03:07:29.037754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 03:07:29.037767 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 03:07:29.037779 kernel: TSC deadline timer available Jan 24 03:07:29.037797 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Jan 24 03:07:29.037810 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 03:07:29.037822 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 24 03:07:29.037834 kernel: Booting paravirtualized kernel on KVM Jan 24 03:07:29.037847 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 03:07:29.037860 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 24 03:07:29.037873 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144 Jan 24 03:07:29.037885 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152 Jan 24 03:07:29.037898 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 24 03:07:29.037915 kernel: kvm-guest: PV spinlocks enabled Jan 24 03:07:29.037928 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 03:07:29.037942 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 03:07:29.037955 kernel: random: crng init done Jan 24 03:07:29.037968 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 03:07:29.037980 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 24 03:07:29.037993 kernel: Fallback order for Node 0: 0 Jan 24 03:07:29.038005 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Jan 24 03:07:29.038023 kernel: Policy zone: DMA32 Jan 24 03:07:29.038036 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 03:07:29.038048 kernel: software IO TLB: area num 16. Jan 24 03:07:29.038061 kernel: Memory: 1901596K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 194760K reserved, 0K cma-reserved) Jan 24 03:07:29.038074 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 24 03:07:29.038087 kernel: Kernel/User page tables isolation: enabled Jan 24 03:07:29.038099 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 03:07:29.038111 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 03:07:29.038124 kernel: Dynamic Preempt: voluntary Jan 24 03:07:29.038141 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 03:07:29.038155 kernel: rcu: RCU event tracing is enabled. Jan 24 03:07:29.038167 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 24 03:07:29.038180 kernel: Trampoline variant of Tasks RCU enabled. 
Jan 24 03:07:29.038193 kernel: Rude variant of Tasks RCU enabled. Jan 24 03:07:29.038218 kernel: Tracing variant of Tasks RCU enabled. Jan 24 03:07:29.038236 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 03:07:29.038250 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 24 03:07:29.038263 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 24 03:07:29.038299 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 03:07:29.038315 kernel: Console: colour VGA+ 80x25 Jan 24 03:07:29.038329 kernel: printk: console [tty0] enabled Jan 24 03:07:29.038349 kernel: printk: console [ttyS0] enabled Jan 24 03:07:29.038362 kernel: ACPI: Core revision 20230628 Jan 24 03:07:29.038375 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 03:07:29.038388 kernel: x2apic enabled Jan 24 03:07:29.038402 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 03:07:29.038420 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 24 03:07:29.038434 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jan 24 03:07:29.038447 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 24 03:07:29.038461 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 24 03:07:29.038474 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 24 03:07:29.038487 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 03:07:29.038499 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 03:07:29.038512 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 03:07:29.038526 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 24 03:07:29.038539 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 24 03:07:29.038557 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 24 03:07:29.038570 kernel: MDS: Mitigation: Clear CPU buffers Jan 24 03:07:29.038583 kernel: MMIO Stale Data: Unknown: No mitigations Jan 24 03:07:29.038596 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 24 03:07:29.038609 kernel: active return thunk: its_return_thunk Jan 24 03:07:29.038622 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 24 03:07:29.038635 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 03:07:29.038648 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 03:07:29.038661 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 03:07:29.038674 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 03:07:29.038687 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 24 03:07:29.038706 kernel: Freeing SMP alternatives memory: 32K Jan 24 03:07:29.038719 kernel: pid_max: default: 32768 minimum: 301 Jan 24 03:07:29.038732 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 03:07:29.038745 kernel: landlock: Up and running. Jan 24 03:07:29.038758 kernel: SELinux: Initializing. 
Jan 24 03:07:29.038771 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 24 03:07:29.038784 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 24 03:07:29.038797 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 24 03:07:29.038810 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 24 03:07:29.038824 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 24 03:07:29.038842 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 24 03:07:29.038856 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Jan 24 03:07:29.038869 kernel: signal: max sigframe size: 1776 Jan 24 03:07:29.038882 kernel: rcu: Hierarchical SRCU implementation. Jan 24 03:07:29.038895 kernel: rcu: Max phase no-delay instances is 400. Jan 24 03:07:29.038909 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 03:07:29.038922 kernel: smp: Bringing up secondary CPUs ... Jan 24 03:07:29.038935 kernel: smpboot: x86: Booting SMP configuration: Jan 24 03:07:29.038948 kernel: .... node #0, CPUs: #1 Jan 24 03:07:29.038966 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 24 03:07:29.038979 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 03:07:29.038992 kernel: smpboot: Max logical packages: 16 Jan 24 03:07:29.039005 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jan 24 03:07:29.039018 kernel: devtmpfs: initialized Jan 24 03:07:29.039031 kernel: x86/mm: Memory block size: 128MB Jan 24 03:07:29.039045 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 03:07:29.039058 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 24 03:07:29.039071 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 03:07:29.041326 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 03:07:29.041351 kernel: audit: initializing netlink subsys (disabled) Jan 24 03:07:29.041365 kernel: audit: type=2000 audit(1769224048.027:1): state=initialized audit_enabled=0 res=1 Jan 24 03:07:29.041378 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 03:07:29.041392 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 03:07:29.041405 kernel: cpuidle: using governor menu Jan 24 03:07:29.041418 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 03:07:29.041431 kernel: dca service started, version 1.12.1 Jan 24 03:07:29.041444 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 24 03:07:29.041464 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 24 03:07:29.041477 kernel: PCI: Using configuration type 1 for base access Jan 24 03:07:29.041491 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 24 03:07:29.041504 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 03:07:29.041517 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 03:07:29.041531 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 03:07:29.041544 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 03:07:29.041557 kernel: ACPI: Added _OSI(Module Device) Jan 24 03:07:29.041570 kernel: ACPI: Added _OSI(Processor Device) Jan 24 03:07:29.041588 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 03:07:29.041602 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 03:07:29.041615 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 03:07:29.041628 kernel: ACPI: Interpreter enabled Jan 24 03:07:29.041641 kernel: ACPI: PM: (supports S0 S5) Jan 24 03:07:29.041654 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 03:07:29.041667 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 03:07:29.041680 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 03:07:29.041694 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 03:07:29.041707 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 03:07:29.042023 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 03:07:29.042211 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 24 03:07:29.042414 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 24 03:07:29.042435 kernel: PCI host bridge to bus 0000:00 Jan 24 03:07:29.042631 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 03:07:29.042794 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 03:07:29.042976 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 03:07:29.043128 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 24 03:07:29.044385 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 24 03:07:29.044588 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 24 03:07:29.044748 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 03:07:29.044987 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 24 03:07:29.045199 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Jan 24 03:07:29.045471 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Jan 24 03:07:29.045651 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Jan 24 03:07:29.045825 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Jan 24 03:07:29.045998 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 03:07:29.046204 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 24 03:07:29.049454 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Jan 24 03:07:29.049682 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 24 03:07:29.049867 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Jan 24 03:07:29.050061 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 24 03:07:29.050233 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Jan 24 03:07:29.050488 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 24 03:07:29.050658 kernel: pci 0000:00:02.3: reg 0x10: [mem 
0xfea54000-0xfea54fff] Jan 24 03:07:29.050861 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 24 03:07:29.051028 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Jan 24 03:07:29.051207 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 24 03:07:29.051724 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Jan 24 03:07:29.051910 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 24 03:07:29.052084 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Jan 24 03:07:29.054378 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 24 03:07:29.054582 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Jan 24 03:07:29.054772 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 24 03:07:29.054945 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 24 03:07:29.055115 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Jan 24 03:07:29.058350 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jan 24 03:07:29.058563 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Jan 24 03:07:29.058784 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 24 03:07:29.058959 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 24 03:07:29.059141 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Jan 24 03:07:29.059356 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Jan 24 03:07:29.059543 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 24 03:07:29.059730 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 24 03:07:29.059926 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 24 03:07:29.060108 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Jan 24 03:07:29.060320 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Jan 24 03:07:29.060985 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 24 03:07:29.061165 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 24 03:07:29.063489 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Jan 24 03:07:29.063680 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Jan 24 03:07:29.063867 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 24 03:07:29.064038 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 24 03:07:29.064205 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 24 03:07:29.066452 kernel: pci_bus 0000:02: extended config space not accessible Jan 24 03:07:29.066661 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Jan 24 03:07:29.066849 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Jan 24 03:07:29.067037 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 24 03:07:29.067223 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 24 03:07:29.067467 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 24 03:07:29.067662 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Jan 24 03:07:29.067844 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 24 03:07:29.068013 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 24 03:07:29.068183 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 24 03:07:29.070469 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 24 03:07:29.070671 kernel: pci 0000:04:00.0: reg 0x20: [mem 
0xfca00000-0xfca03fff 64bit pref] Jan 24 03:07:29.070847 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 24 03:07:29.071015 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 24 03:07:29.071181 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 24 03:07:29.071393 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 24 03:07:29.071563 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 24 03:07:29.071728 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 24 03:07:29.071931 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 24 03:07:29.072100 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 24 03:07:29.072268 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 24 03:07:29.072599 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 24 03:07:29.072768 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 24 03:07:29.072936 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 24 03:07:29.073127 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 24 03:07:29.073325 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 24 03:07:29.073505 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 24 03:07:29.073677 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 24 03:07:29.073856 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 24 03:07:29.074033 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 24 03:07:29.074053 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 03:07:29.074068 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 03:07:29.074081 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 03:07:29.074095 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 03:07:29.074108 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 24 03:07:29.074129 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 24 03:07:29.074143 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 24 03:07:29.074157 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 24 03:07:29.074170 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 24 03:07:29.074184 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 24 03:07:29.074197 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 24 03:07:29.074211 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 24 03:07:29.074224 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 24 03:07:29.074237 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 24 03:07:29.074256 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 24 03:07:29.074269 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 24 03:07:29.076406 kernel: iommu: Default domain type: Translated Jan 24 03:07:29.076434 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 03:07:29.076449 kernel: PCI: Using ACPI for IRQ routing Jan 24 03:07:29.076474 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 03:07:29.076488 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 24 03:07:29.076501 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 24 03:07:29.076811 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 24 03:07:29.077023 kernel: pci 
0000:00:01.0: vgaarb: bridge control possible Jan 24 03:07:29.077197 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 03:07:29.077217 kernel: vgaarb: loaded Jan 24 03:07:29.077232 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 03:07:29.077245 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 03:07:29.077258 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 03:07:29.077272 kernel: pnp: PnP ACPI init Jan 24 03:07:29.079503 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 24 03:07:29.079534 kernel: pnp: PnP ACPI: found 5 devices Jan 24 03:07:29.079548 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 03:07:29.079562 kernel: NET: Registered PF_INET protocol family Jan 24 03:07:29.079576 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 03:07:29.079589 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 24 03:07:29.079603 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 03:07:29.079616 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 03:07:29.079629 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 24 03:07:29.079648 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 24 03:07:29.079662 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 24 03:07:29.079676 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 24 03:07:29.079689 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 03:07:29.079703 kernel: NET: Registered PF_XDP protocol family Jan 24 03:07:29.079873 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 24 03:07:29.080045 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 24 03:07:29.080214 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 24 03:07:29.080441 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 24 03:07:29.080614 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 24 03:07:29.080782 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 24 03:07:29.080950 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 24 03:07:29.081119 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 24 03:07:29.081310 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 24 03:07:29.081496 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 24 03:07:29.081686 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 24 03:07:29.081853 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 24 03:07:29.082019 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 24 03:07:29.082182 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 24 03:07:29.084488 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 24 03:07:29.084703 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 24 03:07:29.084895 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 24 03:07:29.085112 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 24 03:07:29.085339 kernel: pci 0000:00:02.0: PCI 
bridge to [bus 01-02] Jan 24 03:07:29.085515 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 24 03:07:29.085685 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 24 03:07:29.085853 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 24 03:07:29.086023 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 24 03:07:29.086191 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 24 03:07:29.086403 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 24 03:07:29.086573 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 24 03:07:29.086756 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 24 03:07:29.086923 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 24 03:07:29.087090 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 24 03:07:29.089368 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 24 03:07:29.089558 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 24 03:07:29.089742 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 24 03:07:29.089913 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 24 03:07:29.090083 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 24 03:07:29.090254 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 24 03:07:29.090470 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 24 03:07:29.090638 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 24 03:07:29.090805 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 24 03:07:29.090984 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 24 03:07:29.091150 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 24 03:07:29.093378 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 24 03:07:29.093561 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 24 03:07:29.093734 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 24 03:07:29.093905 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 24 03:07:29.094075 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 24 03:07:29.094254 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 24 03:07:29.094485 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 24 03:07:29.094652 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 24 03:07:29.094818 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 24 03:07:29.094984 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 24 03:07:29.095145 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 03:07:29.095333 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 03:07:29.095486 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 03:07:29.095637 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 24 03:07:29.095795 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 24 03:07:29.095943 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 24 03:07:29.096119 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 24 03:07:29.098375 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 24 03:07:29.098552 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 24 
03:07:29.098729 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 24 03:07:29.098903 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 24 03:07:29.099072 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 24 03:07:29.099249 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 24 03:07:29.104099 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 24 03:07:29.104273 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 24 03:07:29.104475 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 24 03:07:29.104645 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 24 03:07:29.104815 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 24 03:07:29.104971 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 24 03:07:29.105153 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 24 03:07:29.105352 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 24 03:07:29.105511 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 24 03:07:29.105684 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 24 03:07:29.105842 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 24 03:07:29.106008 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 24 03:07:29.106193 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 24 03:07:29.106384 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 24 03:07:29.106546 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 24 03:07:29.106718 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 24 03:07:29.106877 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 24 03:07:29.107049 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 24 03:07:29.107078 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 03:07:29.107093 kernel: PCI: CLS 0 bytes, default 64 Jan 24 03:07:29.107108 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 24 03:07:29.107122 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 24 03:07:29.107136 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 24 03:07:29.107151 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 24 03:07:29.107166 kernel: Initialise system trusted keyrings Jan 24 03:07:29.107181 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 24 03:07:29.107200 kernel: Key type asymmetric registered Jan 24 03:07:29.107214 kernel: Asymmetric key parser 'x509' registered Jan 24 03:07:29.107228 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 03:07:29.107242 kernel: io scheduler mq-deadline registered Jan 24 03:07:29.107257 kernel: io scheduler kyber registered Jan 24 03:07:29.107271 kernel: io scheduler bfq registered Jan 24 03:07:29.107490 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 24 03:07:29.107679 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 24 03:07:29.107875 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 03:07:29.108072 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 24 03:07:29.108255 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 
24 03:07:29.109579 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 03:07:29.109754 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 24 03:07:29.109924 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 24 03:07:29.110091 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 03:07:29.110273 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 24 03:07:29.111559 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 24 03:07:29.111732 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 03:07:29.111905 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 24 03:07:29.112073 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 24 03:07:29.112242 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 03:07:29.114770 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 24 03:07:29.114942 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 24 03:07:29.115110 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 03:07:29.115327 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 24 03:07:29.115501 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 24 03:07:29.115668 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 03:07:29.115849 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 24 03:07:29.116017 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 24 03:07:29.116193 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 03:07:29.116215 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 03:07:29.116230 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 03:07:29.116245 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 24 03:07:29.116259 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 03:07:29.116305 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 03:07:29.116321 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 03:07:29.116335 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 03:07:29.116349 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 03:07:29.116364 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 03:07:29.116546 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 24 03:07:29.116718 kernel: rtc_cmos 00:03: registered as rtc0 Jan 24 03:07:29.116888 kernel: rtc_cmos 00:03: setting system clock to 2026-01-24T03:07:28 UTC (1769224048) Jan 24 03:07:29.117056 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 24 03:07:29.117076 kernel: intel_pstate: CPU model not supported Jan 24 03:07:29.117091 kernel: NET: Registered PF_INET6 protocol family Jan 24 03:07:29.117105 kernel: Segment Routing with IPv6 Jan 24 03:07:29.117119 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 03:07:29.117133 kernel: NET: Registered 
PF_PACKET protocol family Jan 24 03:07:29.117147 kernel: Key type dns_resolver registered Jan 24 03:07:29.117161 kernel: IPI shorthand broadcast: enabled Jan 24 03:07:29.117175 kernel: sched_clock: Marking stable (1276014768, 230130319)->(1634929991, -128784904) Jan 24 03:07:29.117197 kernel: registered taskstats version 1 Jan 24 03:07:29.117211 kernel: Loading compiled-in X.509 certificates Jan 24 03:07:29.117226 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 03:07:29.117239 kernel: Key type .fscrypt registered Jan 24 03:07:29.117253 kernel: Key type fscrypt-provisioning registered Jan 24 03:07:29.117267 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 03:07:29.119322 kernel: ima: Allocated hash algorithm: sha1 Jan 24 03:07:29.119340 kernel: ima: No architecture policies found Jan 24 03:07:29.119354 kernel: clk: Disabling unused clocks Jan 24 03:07:29.119377 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 03:07:29.119391 kernel: Write protecting the kernel read-only data: 36864k Jan 24 03:07:29.119406 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 03:07:29.119419 kernel: Run /init as init process Jan 24 03:07:29.119433 kernel: with arguments: Jan 24 03:07:29.119448 kernel: /init Jan 24 03:07:29.119461 kernel: with environment: Jan 24 03:07:29.119475 kernel: HOME=/ Jan 24 03:07:29.119489 kernel: TERM=linux Jan 24 03:07:29.119515 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 03:07:29.119533 systemd[1]: Detected virtualization kvm. Jan 24 03:07:29.119548 systemd[1]: Detected architecture x86-64. Jan 24 03:07:29.119563 systemd[1]: Running in initrd. Jan 24 03:07:29.119577 systemd[1]: No hostname configured, using default hostname. Jan 24 03:07:29.119592 systemd[1]: Hostname set to <localhost>. Jan 24 03:07:29.119607 systemd[1]: Initializing machine ID from VM UUID. Jan 24 03:07:29.119627 systemd[1]: Queued start job for default target initrd.target. Jan 24 03:07:29.119642 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 03:07:29.119657 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 03:07:29.119673 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 03:07:29.119688 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 03:07:29.119703 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 03:07:29.119718 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 03:07:29.119741 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 03:07:29.119756 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 03:07:29.119771 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 03:07:29.119786 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 03:07:29.119801 systemd[1]: Reached target paths.target - Path Units. Jan 24 03:07:29.119822 systemd[1]: Reached target slices.target - Slice Units. Jan 24 03:07:29.119837 systemd[1]: Reached target swap.target - Swaps. Jan 24 03:07:29.119852 systemd[1]: Reached target timers.target - Timer Units. Jan 24 03:07:29.119872 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 03:07:29.119887 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 03:07:29.119902 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 03:07:29.119917 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 03:07:29.119932 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 03:07:29.119947 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 03:07:29.119962 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 03:07:29.119977 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 03:07:29.119991 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 03:07:29.120012 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 03:07:29.120027 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 03:07:29.120042 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 03:07:29.120056 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 03:07:29.120071 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 03:07:29.120086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 03:07:29.120155 systemd-journald[203]: Collecting audit messages is disabled. Jan 24 03:07:29.120195 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 03:07:29.120211 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 03:07:29.120226 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 03:07:29.120247 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 03:07:29.120264 systemd-journald[203]: Journal started Jan 24 03:07:29.122364 systemd-journald[203]: Runtime Journal (/run/log/journal/894bf4abc07c4305ba98d65e70b6eb0b) is 4.7M, max 38.0M, 33.2M free. Jan 24 03:07:29.070401 systemd-modules-load[204]: Inserted module 'overlay' Jan 24 03:07:29.148128 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 03:07:29.148163 kernel: Bridge firewalling registered Jan 24 03:07:29.128897 systemd-modules-load[204]: Inserted module 'br_netfilter' Jan 24 03:07:29.153934 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 03:07:29.153977 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 03:07:29.155006 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 03:07:29.162474 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 03:07:29.169520 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 03:07:29.181741 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 24 03:07:29.183048 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 03:07:29.195379 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 03:07:29.212725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 03:07:29.215807 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 03:07:29.225502 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 03:07:29.229265 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 03:07:29.240547 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 03:07:29.250586 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 03:07:29.269032 dracut-cmdline[240]: dracut-dracut-053 Jan 24 03:07:29.272495 dracut-cmdline[240]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 03:07:29.280561 systemd-resolved[235]: Positive Trust Anchors: Jan 24 03:07:29.281630 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 03:07:29.281681 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 03:07:29.290763 systemd-resolved[235]: Defaulting to hostname 'linux'. Jan 24 03:07:29.294230 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 03:07:29.295068 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 03:07:29.375374 kernel: SCSI subsystem initialized Jan 24 03:07:29.387355 kernel: Loading iSCSI transport class v2.0-870. Jan 24 03:07:29.400352 kernel: iscsi: registered transport (tcp) Jan 24 03:07:29.427454 kernel: iscsi: registered transport (qla4xxx) Jan 24 03:07:29.427572 kernel: QLogic iSCSI HBA Driver Jan 24 03:07:29.485251 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 03:07:29.496672 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 03:07:29.528693 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 24 03:07:29.528842 kernel: device-mapper: uevent: version 1.0.3 Jan 24 03:07:29.530472 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 03:07:29.587392 kernel: raid6: sse2x4 gen() 7590 MB/s Jan 24 03:07:29.602329 kernel: raid6: sse2x2 gen() 5541 MB/s Jan 24 03:07:29.620934 kernel: raid6: sse2x1 gen() 5503 MB/s Jan 24 03:07:29.621031 kernel: raid6: using algorithm sse2x4 gen() 7590 MB/s Jan 24 03:07:29.640054 kernel: raid6: .... xor() 5017 MB/s, rmw enabled Jan 24 03:07:29.640150 kernel: raid6: using ssse3x2 recovery algorithm Jan 24 03:07:29.668336 kernel: xor: automatically using best checksumming function avx Jan 24 03:07:29.868906 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 03:07:29.885790 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 03:07:29.896892 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 03:07:29.928807 systemd-udevd[422]: Using default interface naming scheme 'v255'. Jan 24 03:07:29.936304 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 03:07:29.945482 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 03:07:29.973590 dracut-pre-trigger[431]: rd.md=0: removing MD RAID activation Jan 24 03:07:30.020223 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 03:07:30.026503 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 03:07:30.157115 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 03:07:30.167780 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 03:07:30.200879 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 03:07:30.204336 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 03:07:30.206789 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 03:07:30.209127 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 03:07:30.216539 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 03:07:30.249397 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 03:07:30.310858 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 24 03:07:30.311188 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 03:07:30.316317 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 24 03:07:30.329590 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 03:07:30.329645 kernel: GPT:17805311 != 125829119 Jan 24 03:07:30.329665 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 03:07:30.329695 kernel: GPT:17805311 != 125829119 Jan 24 03:07:30.329713 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 03:07:30.329731 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 03:07:30.353320 kernel: libata version 3.00 loaded. Jan 24 03:07:30.366347 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 03:07:30.368145 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 03:07:30.374214 kernel: AVX version of gcm_enc/dec engaged. Jan 24 03:07:30.374248 kernel: AES CTR mode by8 optimization enabled Jan 24 03:07:30.373921 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 24 03:07:30.374949 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 03:07:30.375131 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 03:07:30.376810 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 03:07:30.385084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 03:07:30.406317 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 03:07:30.423342 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 03:07:30.424311 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 03:07:30.424613 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 03:07:30.433499 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 24 03:07:30.540957 kernel: scsi host0: ahci Jan 24 03:07:30.541275 kernel: scsi host1: ahci Jan 24 03:07:30.541522 kernel: scsi host2: ahci Jan 24 03:07:30.541745 kernel: scsi host3: ahci Jan 24 03:07:30.541948 kernel: scsi host4: ahci Jan 24 03:07:30.542149 kernel: scsi host5: ahci Jan 24 03:07:30.542405 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jan 24 03:07:30.542427 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jan 24 03:07:30.542445 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jan 24 03:07:30.542472 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jan 24 03:07:30.542491 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jan 24 03:07:30.542509 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jan 24 03:07:30.542527 kernel: ACPI: bus type USB registered Jan 24 03:07:30.542545 kernel: usbcore: registered new interface driver usbfs Jan 24 03:07:30.542563 kernel: usbcore: registered new interface driver hub Jan 24 03:07:30.542581 kernel: usbcore: registered new device driver usb Jan 24 03:07:30.542599 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (471) Jan 24 03:07:30.542617 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (477) Jan 24 03:07:30.546655 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 03:07:30.564749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 03:07:30.571824 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 03:07:30.577766 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 24 03:07:30.578645 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 03:07:30.585490 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 03:07:30.587771 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 03:07:30.598398 disk-uuid[560]: Primary Header is updated. Jan 24 03:07:30.598398 disk-uuid[560]: Secondary Entries is updated. Jan 24 03:07:30.598398 disk-uuid[560]: Secondary Header is updated. 
Jan 24 03:07:30.606317 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 03:07:30.615352 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 03:07:30.629868 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 03:07:30.755672 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 03:07:30.755761 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 03:07:30.757504 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 24 03:07:30.767471 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 03:07:30.767514 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 03:07:30.769457 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 03:07:30.841532 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 24 03:07:30.842359 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 24 03:07:30.846356 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 24 03:07:30.850397 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 24 03:07:30.850665 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 24 03:07:30.850961 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 24 03:07:30.855019 kernel: hub 1-0:1.0: USB hub found Jan 24 03:07:30.855360 kernel: hub 1-0:1.0: 4 ports detected Jan 24 03:07:30.855591 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 24 03:07:30.858875 kernel: hub 2-0:1.0: USB hub found Jan 24 03:07:30.859161 kernel: hub 2-0:1.0: 4 ports detected Jan 24 03:07:31.094451 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 24 03:07:31.236317 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 24 03:07:31.243663 kernel: usbcore: registered new interface driver usbhid Jan 24 03:07:31.243713 kernel: usbhid: USB HID core driver Jan 24 03:07:31.251112 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 24 03:07:31.251153 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 24 03:07:31.629345 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 03:07:31.630072 disk-uuid[561]: The operation has completed successfully. Jan 24 03:07:31.698114 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 03:07:31.698414 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 03:07:31.717576 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 03:07:31.725724 sh[586]: Success Jan 24 03:07:31.745335 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 24 03:07:31.816535 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 03:07:31.827453 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 03:07:31.829418 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
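
verity-setup.service above builds /dev/mapper/usr so that every read of the /usr partition is checked against a sha256 hash tree whose root is pinned on the kernel command line. A simplified Merkle-style sketch of that idea, assuming a toy image; real dm-verity adds a salt and superblock and verifies blocks lazily on read rather than up front:

    import hashlib

    BLOCK = 4096  # dm-verity's default data/hash block size

    def hash_level(blocks):
        """Hash each block, then pack the digests into the next level's blocks."""
        digests = b"".join(hashlib.sha256(blk).digest() for blk in blocks)
        return [digests[i:i + BLOCK] for i in range(0, len(digests), BLOCK)]

    def root_hash(data):
        blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
        while len(blocks) > 1:
            blocks = hash_level(blocks)
        return hashlib.sha256(blocks[0]).hexdigest()

    image = bytes(8 * BLOCK)      # stand-in for the /usr partition contents
    expected = root_hash(image)   # the role played by verity.usrhash= at boot
    assert root_hash(image) == expected  # flipping any bit in `image` changes the root
    print(expected)
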
Jan 24 03:07:31.856350 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 03:07:31.860929 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 03:07:31.860981 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 03:07:31.861004 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 03:07:31.864198 kernel: BTRFS info (device dm-0): using free space tree Jan 24 03:07:31.874447 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 03:07:31.875991 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 03:07:31.882512 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 03:07:31.886525 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 03:07:31.899322 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 03:07:31.902930 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 03:07:31.902964 kernel: BTRFS info (device vda6): using free space tree Jan 24 03:07:31.908313 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 03:07:31.921578 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 03:07:31.923601 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 03:07:31.942260 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 03:07:31.949610 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 03:07:32.069096 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 03:07:32.082609 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 03:07:32.107954 ignition[684]: Ignition 2.19.0 Jan 24 03:07:32.107980 ignition[684]: Stage: fetch-offline Jan 24 03:07:32.111795 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 03:07:32.108075 ignition[684]: no configs at "/usr/lib/ignition/base.d" Jan 24 03:07:32.108101 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 03:07:32.108376 ignition[684]: parsed url from cmdline: "" Jan 24 03:07:32.108383 ignition[684]: no config URL provided Jan 24 03:07:32.108393 ignition[684]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 03:07:32.108409 ignition[684]: no config at "/usr/lib/ignition/user.ign" Jan 24 03:07:32.108420 ignition[684]: failed to fetch config: resource requires networking Jan 24 03:07:32.108724 ignition[684]: Ignition finished successfully Jan 24 03:07:32.132801 systemd-networkd[772]: lo: Link UP Jan 24 03:07:32.132829 systemd-networkd[772]: lo: Gained carrier Jan 24 03:07:32.135643 systemd-networkd[772]: Enumeration completed Jan 24 03:07:32.135796 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 03:07:32.136535 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 03:07:32.136541 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 03:07:32.136796 systemd[1]: Reached target network.target - Network. 
Jan 24 03:07:32.138039 systemd-networkd[772]: eth0: Link UP Jan 24 03:07:32.138045 systemd-networkd[772]: eth0: Gained carrier Jan 24 03:07:32.138056 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 03:07:32.149540 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 24 03:07:32.170946 ignition[776]: Ignition 2.19.0 Jan 24 03:07:32.170966 ignition[776]: Stage: fetch Jan 24 03:07:32.171213 ignition[776]: no configs at "/usr/lib/ignition/base.d" Jan 24 03:07:32.171247 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 03:07:32.171443 ignition[776]: parsed url from cmdline: "" Jan 24 03:07:32.171450 ignition[776]: no config URL provided Jan 24 03:07:32.171460 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 03:07:32.171476 ignition[776]: no config at "/usr/lib/ignition/user.ign" Jan 24 03:07:32.171678 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 24 03:07:32.171715 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 24 03:07:32.171728 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 24 03:07:32.172073 ignition[776]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 24 03:07:32.198477 systemd-networkd[772]: eth0: DHCPv4 address 10.244.26.234/30, gateway 10.244.26.233 acquired from 10.244.26.233 Jan 24 03:07:32.372981 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Jan 24 03:07:32.393782 ignition[776]: GET result: OK Jan 24 03:07:32.394642 ignition[776]: parsing config with SHA512: 3c6ac259ba6272f81bae331c2277fc994a3ab39afab2ce6489e2a3d6da253f727ef606e71ad037d2abc41ed83b5433e5cbb6450ee639406d469a52f52453a5ee Jan 24 03:07:32.401550 unknown[776]: fetched base config from "system" Jan 24 03:07:32.401570 unknown[776]: fetched base config from "system" Jan 24 03:07:32.401614 unknown[776]: fetched user config from "openstack" Jan 24 03:07:32.402798 ignition[776]: fetch: fetch complete Jan 24 03:07:32.405088 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 03:07:32.402813 ignition[776]: fetch: fetch passed Jan 24 03:07:32.402886 ignition[776]: Ignition finished successfully Jan 24 03:07:32.414503 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 03:07:32.436309 ignition[783]: Ignition 2.19.0 Jan 24 03:07:32.436337 ignition[783]: Stage: kargs Jan 24 03:07:32.436573 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 24 03:07:32.439316 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 03:07:32.436594 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 03:07:32.437798 ignition[783]: kargs: kargs passed Jan 24 03:07:32.437878 ignition[783]: Ignition finished successfully Jan 24 03:07:32.449654 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 03:07:32.472685 ignition[790]: Ignition 2.19.0 Jan 24 03:07:32.472705 ignition[790]: Stage: disks Jan 24 03:07:32.472956 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 24 03:07:32.472976 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 03:07:32.475536 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
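
The fetch stage above illustrates Ignition's retry behavior: attempt #1 against the OpenStack metadata endpoint fails with "network is unreachable", DHCP then configures eth0, and attempt #2 succeeds. A minimal sketch of that poll-with-retry loop in Python; the attempt count, delay, and timeout values are assumptions, not Ignition's:

    import time
    import urllib.request

    # OpenStack user-data endpoint the log shows Ignition polling.
    URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_userdata(url=URL, attempts=5, delay=2.0, timeout=5.0):
        """Retry until DHCP has configured the interface, logging each attempt."""
        for attempt in range(1, attempts + 1):
            print(f"GET {url}: attempt #{attempt}")
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    print("GET result: OK")
                    return resp.read()
            except OSError as err:  # covers URLError, timeouts, unreachable network
                print(f"GET error: {err}")
                time.sleep(delay)   # back off and wait for networking/config drive
        raise RuntimeError("failed to fetch config: resource requires networking")

    # userdata = fetch_userdata()  # only resolvable from inside an OpenStack instance
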
Jan 24 03:07:32.474240 ignition[790]: disks: disks passed Jan 24 03:07:32.477067 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 03:07:32.474334 ignition[790]: Ignition finished successfully Jan 24 03:07:32.478098 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 03:07:32.479694 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 03:07:32.480987 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 03:07:32.482597 systemd[1]: Reached target basic.target - Basic System. Jan 24 03:07:32.491569 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 03:07:32.514101 systemd-fsck[799]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 24 03:07:32.518010 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 03:07:32.531434 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 03:07:32.654325 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 03:07:32.655622 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 03:07:32.657009 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 03:07:32.666477 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 03:07:32.669517 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 03:07:32.671847 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 03:07:32.679669 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 24 03:07:32.681907 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 03:07:32.681968 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 03:07:32.692381 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (807) Jan 24 03:07:32.692417 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 03:07:32.687109 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 03:07:32.700669 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 03:07:32.700706 kernel: BTRFS info (device vda6): using free space tree Jan 24 03:07:32.706314 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 03:07:32.717158 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 03:07:32.722786 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 03:07:32.792132 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 03:07:32.803356 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Jan 24 03:07:32.811827 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 03:07:32.822400 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 03:07:32.934983 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 03:07:32.943448 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 03:07:32.947476 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 03:07:32.958029 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 24 03:07:32.960332 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 03:07:32.991356 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 03:07:33.005334 ignition[924]: INFO : Ignition 2.19.0 Jan 24 03:07:33.005334 ignition[924]: INFO : Stage: mount Jan 24 03:07:33.007901 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 03:07:33.007901 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 03:07:33.007901 ignition[924]: INFO : mount: mount passed Jan 24 03:07:33.007901 ignition[924]: INFO : Ignition finished successfully Jan 24 03:07:33.008150 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 03:07:33.746633 systemd-networkd[772]: eth0: Gained IPv6LL Jan 24 03:07:35.257027 systemd-networkd[772]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:6ba:24:19ff:fef4:1aea/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:6ba:24:19ff:fef4:1aea/64 assigned by NDisc. Jan 24 03:07:35.257046 systemd-networkd[772]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 24 03:07:39.864559 coreos-metadata[809]: Jan 24 03:07:39.864 WARN failed to locate config-drive, using the metadata service API instead Jan 24 03:07:39.887577 coreos-metadata[809]: Jan 24 03:07:39.887 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 24 03:07:39.916133 coreos-metadata[809]: Jan 24 03:07:39.915 INFO Fetch successful Jan 24 03:07:39.917148 coreos-metadata[809]: Jan 24 03:07:39.917 INFO wrote hostname srv-jddbi.gb1.brightbox.com to /sysroot/etc/hostname Jan 24 03:07:39.919740 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 24 03:07:39.919952 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 24 03:07:39.930547 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 03:07:39.953800 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 03:07:39.967326 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Jan 24 03:07:39.973316 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 03:07:39.973364 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 03:07:39.973384 kernel: BTRFS info (device vda6): using free space tree Jan 24 03:07:39.979343 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 03:07:39.982036 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
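
The DHCPv6/NDisc conflict warning near the start of this stretch comes with its own remedy: pin the SLAAC interface identifier with IPv6Token=, or disable autonomous prefixes. An illustrative drop-in that would take precedence over zz-default.network; the file name is hypothetical and the token value is taken from the logged address:

    # /etc/systemd/network/10-eth0.network
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    # Pin the lower 64 bits so NDisc regenerates the same address:
    IPv6Token=::24:19ff:fef4:1aea

    # ...or keep only the DHCPv6 lease and suppress SLAAC addresses:
    [IPv6AcceptRA]
    UseAutonomousPrefix=no
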
Jan 24 03:07:40.013733 ignition[958]: INFO : Ignition 2.19.0 Jan 24 03:07:40.015642 ignition[958]: INFO : Stage: files Jan 24 03:07:40.015642 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 03:07:40.015642 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 03:07:40.019101 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jan 24 03:07:40.020362 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 03:07:40.020362 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 03:07:40.024035 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 03:07:40.025058 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 03:07:40.026066 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 03:07:40.025228 unknown[958]: wrote ssh authorized keys file for user: core Jan 24 03:07:40.028063 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 24 03:07:40.028063 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 24 03:07:40.028063 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 03:07:40.028063 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 03:07:40.245197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 24 03:07:40.475336 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 03:07:40.475336 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 24 03:07:40.475336 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 03:07:40.475336 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 03:07:40.486216 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 03:07:40.486216 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 03:07:40.486216 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 03:07:40.486216 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 03:07:40.486216 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 03:07:40.486216 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 03:07:40.486216 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 03:07:40.486216 ignition[958]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 03:07:40.486216 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 03:07:40.486216 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 03:07:40.486216 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 03:07:41.082552 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 24 03:07:44.240085 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 03:07:44.240085 ignition[958]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 24 03:07:44.244631 ignition[958]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 03:07:44.244631 ignition[958]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 03:07:44.244631 ignition[958]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 24 03:07:44.244631 ignition[958]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 24 03:07:44.244631 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 03:07:44.244631 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 03:07:44.244631 ignition[958]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 24 03:07:44.244631 ignition[958]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 24 03:07:44.244631 ignition[958]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 03:07:44.244631 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 03:07:44.244631 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 03:07:44.244631 ignition[958]: INFO : files: files passed Jan 24 03:07:44.244631 ignition[958]: INFO : Ignition finished successfully Jan 24 03:07:44.244266 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 03:07:44.256625 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 03:07:44.269819 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 03:07:44.271962 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 03:07:44.272138 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 24 03:07:44.295194 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 03:07:44.295194 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 03:07:44.299116 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 03:07:44.301804 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 03:07:44.302994 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 03:07:44.310522 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 03:07:44.347687 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 03:07:44.347894 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 03:07:44.350103 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 03:07:44.351399 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 03:07:44.353047 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 03:07:44.365742 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 03:07:44.384976 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 03:07:44.389516 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 03:07:44.408157 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 03:07:44.409204 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 03:07:44.410880 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 03:07:44.413110 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 03:07:44.413333 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 03:07:44.415630 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 03:07:44.416537 systemd[1]: Stopped target basic.target - Basic System. Jan 24 03:07:44.417925 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 03:07:44.420474 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 03:07:44.421515 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 03:07:44.422462 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 03:07:44.423377 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 03:07:44.424367 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 03:07:44.425260 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 03:07:44.427099 systemd[1]: Stopped target swap.target - Swaps. Jan 24 03:07:44.428511 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 03:07:44.428799 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 03:07:44.430781 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 03:07:44.431683 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 03:07:44.433198 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 03:07:44.433397 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 24 03:07:44.434869 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 03:07:44.435184 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 03:07:44.436783 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 03:07:44.436955 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 03:07:44.438885 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 03:07:44.439098 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 03:07:44.449413 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 03:07:44.461689 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 03:07:44.465614 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 03:07:44.466850 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 03:07:44.469948 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 03:07:44.472439 ignition[1010]: INFO : Ignition 2.19.0 Jan 24 03:07:44.472439 ignition[1010]: INFO : Stage: umount Jan 24 03:07:44.472439 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 03:07:44.472439 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 03:07:44.470182 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 03:07:44.482649 ignition[1010]: INFO : umount: umount passed Jan 24 03:07:44.482649 ignition[1010]: INFO : Ignition finished successfully Jan 24 03:07:44.479931 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 03:07:44.480119 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 03:07:44.483945 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 03:07:44.484108 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 03:07:44.490315 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 03:07:44.490412 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 03:07:44.493145 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 03:07:44.493222 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 03:07:44.494788 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 03:07:44.494854 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 03:07:44.496450 systemd[1]: Stopped target network.target - Network. Jan 24 03:07:44.499371 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 03:07:44.499459 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 03:07:44.500439 systemd[1]: Stopped target paths.target - Path Units. Jan 24 03:07:44.501054 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 03:07:44.505516 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 03:07:44.506576 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 03:07:44.508233 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 03:07:44.509704 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 03:07:44.509778 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 03:07:44.511145 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 03:07:44.511208 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 24 03:07:44.512462 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 03:07:44.512545 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 03:07:44.513913 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 03:07:44.513980 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 03:07:44.515641 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 03:07:44.518260 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 03:07:44.520835 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 03:07:44.521701 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 03:07:44.521855 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 03:07:44.521986 systemd-networkd[772]: eth0: DHCPv6 lease lost Jan 24 03:07:44.524965 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 03:07:44.525231 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 03:07:44.528654 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 03:07:44.528777 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 03:07:44.533118 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 03:07:44.533236 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 03:07:44.541539 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 03:07:44.543244 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 03:07:44.543367 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 03:07:44.547974 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 03:07:44.550042 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 03:07:44.550242 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 03:07:44.560788 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 03:07:44.561945 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 03:07:44.564970 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 03:07:44.565132 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 03:07:44.568171 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 03:07:44.568262 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 03:07:44.570095 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 03:07:44.570156 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 03:07:44.573416 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 03:07:44.573511 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 03:07:44.575803 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 03:07:44.575870 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 03:07:44.576773 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 03:07:44.576842 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 03:07:44.586608 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 03:07:44.589181 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 24 03:07:44.589311 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 03:07:44.591827 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 03:07:44.591901 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 03:07:44.592642 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 03:07:44.592710 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 03:07:44.593553 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 24 03:07:44.593619 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 03:07:44.595766 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 03:07:44.595835 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 03:07:44.598202 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 03:07:44.598271 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 03:07:44.600741 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 03:07:44.600813 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 03:07:44.604984 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 03:07:44.605210 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 03:07:44.606650 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 03:07:44.612561 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 03:07:44.627761 systemd[1]: Switching root. Jan 24 03:07:44.662175 systemd-journald[203]: Journal stopped Jan 24 03:07:46.224584 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Jan 24 03:07:46.224750 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 03:07:46.224791 kernel: SELinux: policy capability open_perms=1 Jan 24 03:07:46.224812 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 03:07:46.224831 kernel: SELinux: policy capability always_check_network=0 Jan 24 03:07:46.224849 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 03:07:46.224869 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 03:07:46.224894 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 03:07:46.224913 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 03:07:46.224944 kernel: audit: type=1403 audit(1769224064.980:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 03:07:46.224996 systemd[1]: Successfully loaded SELinux policy in 49.339ms. Jan 24 03:07:46.225036 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.793ms. Jan 24 03:07:46.225066 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 03:07:46.225088 systemd[1]: Detected virtualization kvm. Jan 24 03:07:46.225108 systemd[1]: Detected architecture x86-64. Jan 24 03:07:46.225128 systemd[1]: Detected first boot. Jan 24 03:07:46.225148 systemd[1]: Hostname set to <srv-jddbi.gb1.brightbox.com>. Jan 24 03:07:46.225181 systemd[1]: Initializing machine ID from VM UUID. 
Jan 24 03:07:46.225203 zram_generator::config[1069]: No configuration found. Jan 24 03:07:46.225224 systemd[1]: Populated /etc with preset unit settings. Jan 24 03:07:46.225244 systemd[1]: Queued start job for default target multi-user.target. Jan 24 03:07:46.225265 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 24 03:07:46.225302 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 03:07:46.225333 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 03:07:46.225361 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 03:07:46.225399 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 03:07:46.225421 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 03:07:46.225442 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 03:07:46.225462 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 03:07:46.225482 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 03:07:46.225501 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 03:07:46.225522 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 03:07:46.225542 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 03:07:46.225563 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 03:07:46.225596 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 03:07:46.225618 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 03:07:46.225638 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 03:07:46.225658 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 03:07:46.225693 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 03:07:46.225762 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 03:07:46.225801 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 03:07:46.225840 systemd[1]: Reached target slices.target - Slice Units. Jan 24 03:07:46.225864 systemd[1]: Reached target swap.target - Swaps. Jan 24 03:07:46.225886 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 03:07:46.225907 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 03:07:46.225957 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 03:07:46.226023 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 03:07:46.226047 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 03:07:46.226068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 03:07:46.226088 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 03:07:46.226109 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 03:07:46.226129 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 03:07:46.226151 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jan 24 03:07:46.226171 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 03:07:46.226192 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 03:07:46.226229 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 03:07:46.226252 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 03:07:46.226273 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 03:07:46.229467 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 03:07:46.229497 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 03:07:46.229525 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 03:07:46.229546 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 03:07:46.229577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 03:07:46.229607 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 03:07:46.229645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 03:07:46.229667 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 03:07:46.229687 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 03:07:46.229708 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 03:07:46.229729 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 24 03:07:46.229750 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 24 03:07:46.229770 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 03:07:46.229789 kernel: fuse: init (API version 7.39) Jan 24 03:07:46.229822 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 03:07:46.229844 kernel: loop: module loaded Jan 24 03:07:46.229864 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 03:07:46.229885 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 03:07:46.229905 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 03:07:46.229934 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 03:07:46.229961 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 03:07:46.229996 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 03:07:46.230025 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 03:07:46.230059 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 03:07:46.230088 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 03:07:46.230116 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 03:07:46.230177 systemd-journald[1177]: Collecting audit messages is disabled. Jan 24 03:07:46.230224 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 24 03:07:46.230246 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 03:07:46.230266 kernel: ACPI: bus type drm_connector registered Jan 24 03:07:46.230303 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 03:07:46.230378 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 03:07:46.230403 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 03:07:46.230424 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 03:07:46.230444 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 03:07:46.230479 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 03:07:46.230503 systemd-journald[1177]: Journal started Jan 24 03:07:46.230536 systemd-journald[1177]: Runtime Journal (/run/log/journal/894bf4abc07c4305ba98d65e70b6eb0b) is 4.7M, max 38.0M, 33.2M free. Jan 24 03:07:46.235057 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 03:07:46.236234 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 03:07:46.237552 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 03:07:46.240771 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 03:07:46.241020 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 03:07:46.243739 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 03:07:46.244005 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 03:07:46.245162 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 03:07:46.246906 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 03:07:46.248590 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 03:07:46.264347 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 03:07:46.272459 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 03:07:46.278398 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 03:07:46.281468 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 03:07:46.294566 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 03:07:46.302507 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 03:07:46.303690 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 03:07:46.313013 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 03:07:46.316612 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 03:07:46.323565 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 03:07:46.335415 systemd-journald[1177]: Time spent on flushing to /var/log/journal/894bf4abc07c4305ba98d65e70b6eb0b is 93.443ms for 1125 entries. Jan 24 03:07:46.335415 systemd-journald[1177]: System Journal (/var/log/journal/894bf4abc07c4305ba98d65e70b6eb0b) is 8.0M, max 584.8M, 576.8M free. Jan 24 03:07:46.465947 systemd-journald[1177]: Received client request to flush runtime journal. 
Jan 24 03:07:46.337240 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 03:07:46.348309 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 03:07:46.349942 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 03:07:46.375796 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 03:07:46.380934 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 03:07:46.424141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 03:07:46.454496 systemd-tmpfiles[1226]: ACLs are not supported, ignoring. Jan 24 03:07:46.454562 systemd-tmpfiles[1226]: ACLs are not supported, ignoring. Jan 24 03:07:46.461954 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 03:07:46.476640 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 03:07:46.479668 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 03:07:46.488706 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 03:07:46.501479 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 03:07:46.506682 udevadm[1239]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 03:07:46.546713 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 03:07:46.557618 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 03:07:46.582686 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jan 24 03:07:46.583224 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jan 24 03:07:46.592912 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 03:07:47.077548 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 03:07:47.086523 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 03:07:47.123500 systemd-udevd[1255]: Using default interface naming scheme 'v255'. Jan 24 03:07:47.152450 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 03:07:47.164461 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 03:07:47.193481 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 03:07:47.263571 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 24 03:07:47.288438 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 03:07:47.398334 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1264) Jan 24 03:07:47.427402 systemd-networkd[1260]: lo: Link UP Jan 24 03:07:47.427420 systemd-networkd[1260]: lo: Gained carrier Jan 24 03:07:47.430587 systemd-networkd[1260]: Enumeration completed Jan 24 03:07:47.431277 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 03:07:47.431410 systemd-networkd[1260]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 24 03:07:47.433160 systemd-networkd[1260]: eth0: Link UP Jan 24 03:07:47.433330 systemd-networkd[1260]: eth0: Gained carrier Jan 24 03:07:47.433436 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 03:07:47.436404 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 03:07:47.441477 systemd-networkd[1260]: eth0: DHCPv4 address 10.244.26.234/30, gateway 10.244.26.233 acquired from 10.244.26.233 Jan 24 03:07:47.443868 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 03:07:47.450309 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 03:07:47.459307 kernel: ACPI: button: Power Button [PWRF] Jan 24 03:07:47.456557 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 03:07:47.476348 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 03:07:47.524078 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 03:07:47.540335 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 03:07:47.540998 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 24 03:07:47.545484 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 24 03:07:47.545769 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 03:07:47.627723 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 03:07:47.774267 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 03:07:47.823833 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 03:07:47.834677 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 03:07:47.856103 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 03:07:47.895228 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 03:07:47.897049 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 03:07:47.906594 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 03:07:47.913898 lvm[1298]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 03:07:47.954733 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 03:07:47.956492 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 03:07:47.957439 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 03:07:47.957594 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 03:07:47.958426 systemd[1]: Reached target machines.target - Containers. Jan 24 03:07:47.960977 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 03:07:47.967566 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 03:07:47.972539 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 24 03:07:47.973633 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 03:07:47.981507 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 03:07:47.987521 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 03:07:47.998493 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 03:07:48.003832 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 03:07:48.011065 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 03:07:48.043378 kernel: loop0: detected capacity change from 0 to 142488 Jan 24 03:07:48.062436 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 03:07:48.067094 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 03:07:48.082487 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 03:07:48.110640 kernel: loop1: detected capacity change from 0 to 8 Jan 24 03:07:48.145511 kernel: loop2: detected capacity change from 0 to 224512 Jan 24 03:07:48.188327 kernel: loop3: detected capacity change from 0 to 140768 Jan 24 03:07:48.243367 kernel: loop4: detected capacity change from 0 to 142488 Jan 24 03:07:48.272338 kernel: loop5: detected capacity change from 0 to 8 Jan 24 03:07:48.277310 kernel: loop6: detected capacity change from 0 to 224512 Jan 24 03:07:48.295444 kernel: loop7: detected capacity change from 0 to 140768 Jan 24 03:07:48.313468 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 24 03:07:48.314266 (sd-merge)[1320]: Merged extensions into '/usr'. Jan 24 03:07:48.342063 systemd[1]: Reloading requested from client PID 1306 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 03:07:48.342104 systemd[1]: Reloading... Jan 24 03:07:48.450320 zram_generator::config[1345]: No configuration found. Jan 24 03:07:48.639470 ldconfig[1302]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 03:07:48.685526 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 03:07:48.781466 systemd[1]: Reloading finished in 438 ms. Jan 24 03:07:48.805228 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 03:07:48.812661 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 03:07:48.825803 systemd[1]: Starting ensure-sysext.service... Jan 24 03:07:48.830475 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 03:07:48.836108 systemd[1]: Reloading requested from client PID 1412 ('systemctl') (unit ensure-sysext.service)... Jan 24 03:07:48.836138 systemd[1]: Reloading... Jan 24 03:07:48.883100 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 03:07:48.883743 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
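
The (sd-merge) lines above report systemd-sysext stacking four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack') over the base tree; the merge is a read-only overlayfs mount on /usr. A sketch of the equivalent mount, assuming hypothetical /run paths for the unpacked extension trees (sysext discovers real images under /var/lib/extensions and /etc/extensions):

    import subprocess

    # Hypothetical locations of the unpacked extension trees.
    extension_trees = [
        "/run/sysext/containerd-flatcar",
        "/run/sysext/docker-flatcar",
        "/run/sysext/kubernetes",
        "/run/sysext/oem-openstack",
    ]

    # overlayfs consults lowerdir entries left to right, so extensions are
    # listed before the base /usr they shadow; omitting upperdir keeps the
    # resulting overlay read-only.
    lowerdir = ":".join(extension_trees + ["/usr"])
    subprocess.run(
        ["mount", "-t", "overlay", "overlay", "-o", f"lowerdir={lowerdir}", "/usr"],
        check=True,
    )
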
Jan 24 03:07:48.887498 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 24 03:07:48.887924 systemd-tmpfiles[1413]: ACLs are not supported, ignoring.
Jan 24 03:07:48.888055 systemd-tmpfiles[1413]: ACLs are not supported, ignoring.
Jan 24 03:07:48.896910 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 03:07:48.896929 systemd-tmpfiles[1413]: Skipping /boot
Jan 24 03:07:48.923374 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 03:07:48.923449 systemd-tmpfiles[1413]: Skipping /boot
Jan 24 03:07:48.925369 zram_generator::config[1437]: No configuration found.
Jan 24 03:07:49.137114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 03:07:49.230801 systemd[1]: Reloading finished in 394 ms.
Jan 24 03:07:49.256700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 03:07:49.275535 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 03:07:49.282505 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 24 03:07:49.292529 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 24 03:07:49.301535 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 03:07:49.314540 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 24 03:07:49.330775 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 03:07:49.331502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 03:07:49.338627 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 03:07:49.351649 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 03:07:49.359589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 03:07:49.363769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 03:07:49.363958 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 03:07:49.369170 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 24 03:07:49.373800 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 03:07:49.374068 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 03:07:49.383776 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 03:07:49.384050 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 03:07:49.405844 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 24 03:07:49.415204 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 03:07:49.415602 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 03:07:49.428615 systemd-networkd[1260]: eth0: Gained IPv6LL
Jan 24 03:07:49.433312 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 03:07:49.434037 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 03:07:49.447655 augenrules[1539]: No rules
Jan 24 03:07:49.453418 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 03:07:49.464447 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 03:07:49.470629 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 03:07:49.475542 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 03:07:49.483657 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 03:07:49.495650 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 24 03:07:49.497478 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 03:07:49.502094 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 24 03:07:49.513176 systemd-resolved[1515]: Positive Trust Anchors:
Jan 24 03:07:49.513233 systemd-resolved[1515]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 03:07:49.513279 systemd-resolved[1515]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 03:07:49.515437 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 03:07:49.519738 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 24 03:07:49.524781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 03:07:49.525047 systemd-resolved[1515]: Using system hostname 'srv-jddbi.gb1.brightbox.com'.
Jan 24 03:07:49.527767 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 03:07:49.529091 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 03:07:49.530713 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 03:07:49.530992 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 03:07:49.532667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 03:07:49.532907 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 03:07:49.534605 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 03:07:49.534917 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 03:07:49.546082 systemd[1]: Finished ensure-sysext.service.
Jan 24 03:07:49.547464 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 24 03:07:49.556504 systemd[1]: Reached target network.target - Network.
Jan 24 03:07:49.557440 systemd[1]: Reached target network-online.target - Network is Online.
Jan 24 03:07:49.558231 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 03:07:49.559123 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 03:07:49.559225 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 03:07:49.565496 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 24 03:07:49.566423 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 24 03:07:49.652847 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 24 03:07:49.654194 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 03:07:49.655252 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 24 03:07:49.656103 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 24 03:07:49.656908 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 24 03:07:49.657774 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 24 03:07:49.657830 systemd[1]: Reached target paths.target - Path Units.
Jan 24 03:07:49.658499 systemd[1]: Reached target time-set.target - System Time Set.
Jan 24 03:07:49.659489 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 24 03:07:49.660415 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 24 03:07:49.661269 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 03:07:49.663324 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 24 03:07:49.666216 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 24 03:07:49.669187 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 24 03:07:49.672865 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 24 03:07:49.673846 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 03:07:49.674560 systemd[1]: Reached target basic.target - Basic System.
Jan 24 03:07:49.675574 systemd[1]: System is tainted: cgroupsv1
Jan 24 03:07:49.675643 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 24 03:07:49.675701 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 24 03:07:49.679432 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 24 03:07:49.698582 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 24 03:07:49.704387 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 24 03:07:49.714436 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 24 03:07:49.720475 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 24 03:07:49.723377 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 24 03:07:49.727135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 03:07:49.735315 jq[1577]: false
Jan 24 03:07:49.746575 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 24 03:07:49.763542 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 24 03:07:49.776448 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 24 03:07:49.782261 dbus-daemon[1576]: [system] SELinux support is enabled
Jan 24 03:07:49.784521 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 24 03:07:49.792352 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 24 03:07:49.794057 dbus-daemon[1576]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1260 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 24 03:07:49.814578 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 24 03:07:49.818442 extend-filesystems[1578]: Found loop4
Jan 24 03:07:49.818442 extend-filesystems[1578]: Found loop5
Jan 24 03:07:49.818442 extend-filesystems[1578]: Found loop6
Jan 24 03:07:49.826774 extend-filesystems[1578]: Found loop7
Jan 24 03:07:49.826774 extend-filesystems[1578]: Found vda
Jan 24 03:07:49.826774 extend-filesystems[1578]: Found vda1
Jan 24 03:07:49.826774 extend-filesystems[1578]: Found vda2
Jan 24 03:07:49.826774 extend-filesystems[1578]: Found vda3
Jan 24 03:07:49.826774 extend-filesystems[1578]: Found usr
Jan 24 03:07:49.826774 extend-filesystems[1578]: Found vda4
Jan 24 03:07:49.826774 extend-filesystems[1578]: Found vda6
Jan 24 03:07:49.826774 extend-filesystems[1578]: Found vda7
Jan 24 03:07:49.826774 extend-filesystems[1578]: Found vda9
Jan 24 03:07:49.826774 extend-filesystems[1578]: Checking size of /dev/vda9
Jan 24 03:07:49.820264 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 24 03:07:49.869464 extend-filesystems[1578]: Resized partition /dev/vda9
Jan 24 03:07:49.833232 systemd[1]: Starting update-engine.service - Update Engine...
Jan 24 03:07:50.934219 extend-filesystems[1610]: resize2fs 1.47.1 (20-May-2024)
Jan 24 03:07:50.945961 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 24 03:07:49.849411 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 24 03:07:49.856608 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 24 03:07:49.872241 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 24 03:07:49.874867 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 24 03:07:49.887430 systemd[1]: motdgen.service: Deactivated successfully.
Jan 24 03:07:50.928268 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 24 03:07:50.928288 systemd-resolved[1515]: Clock change detected. Flushing caches.
Jan 24 03:07:50.928585 systemd-timesyncd[1569]: Contacted time server 139.162.242.115:123 (0.flatcar.pool.ntp.org).
Jan 24 03:07:50.928683 systemd-timesyncd[1569]: Initial clock synchronization to Sat 2026-01-24 03:07:50.928206 UTC.
Jan 24 03:07:50.947742 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 24 03:07:50.948122 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 24 03:07:50.974992 jq[1604]: true
Jan 24 03:07:50.988989 dbus-daemon[1576]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 24 03:07:51.008854 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 24 03:07:51.008901 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 24 03:07:51.016671 update_engine[1603]: I20260124 03:07:51.016429 1603 main.cc:92] Flatcar Update Engine starting
Jan 24 03:07:51.017191 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 24 03:07:51.018189 (ntainerd)[1627]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 24 03:07:51.018711 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 24 03:07:51.018752 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 24 03:07:51.039801 systemd[1]: Started update-engine.service - Update Engine.
Jan 24 03:07:51.052388 update_engine[1603]: I20260124 03:07:51.043049 1603 update_check_scheduler.cc:74] Next update check in 6m50s
Jan 24 03:07:51.043980 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 24 03:07:51.045817 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 24 03:07:51.061361 tar[1616]: linux-amd64/LICENSE
Jan 24 03:07:51.061361 tar[1616]: linux-amd64/helm
Jan 24 03:07:51.078065 jq[1626]: true
Jan 24 03:07:51.093680 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1257)
Jan 24 03:07:51.113849 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 24 03:07:51.179430 systemd-logind[1597]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 24 03:07:51.179475 systemd-logind[1597]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 24 03:07:51.201981 systemd-logind[1597]: New seat seat0.
Jan 24 03:07:51.206234 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 24 03:07:51.390630 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 24 03:07:51.424760 extend-filesystems[1610]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 24 03:07:51.424760 extend-filesystems[1610]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 24 03:07:51.424760 extend-filesystems[1610]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 24 03:07:51.458665 extend-filesystems[1578]: Resized filesystem in /dev/vda9
Jan 24 03:07:51.430808 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 24 03:07:51.468106 bash[1652]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 03:07:51.451252 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 24 03:07:51.451829 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 24 03:07:51.476979 systemd[1]: Starting sshkeys.service...
Jan 24 03:07:51.496197 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 24 03:07:51.505084 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 24 03:07:51.597015 locksmithd[1632]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 24 03:07:51.620758 dbus-daemon[1576]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 24 03:07:51.624335 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 24 03:07:51.632436 dbus-daemon[1576]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1631 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 24 03:07:51.648089 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 24 03:07:51.689142 polkitd[1679]: Started polkitd version 121
Jan 24 03:07:51.712777 polkitd[1679]: Loading rules from directory /etc/polkit-1/rules.d
Jan 24 03:07:51.712916 polkitd[1679]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 24 03:07:51.714307 polkitd[1679]: Finished loading, compiling and executing 2 rules
Jan 24 03:07:51.717857 dbus-daemon[1576]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 24 03:07:51.718343 systemd[1]: Started polkit.service - Authorization Manager.
Jan 24 03:07:51.720707 polkitd[1679]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 24 03:07:51.753271 systemd-hostnamed[1631]: Hostname set to <srv-jddbi.gb1.brightbox.com> (static)
Jan 24 03:07:51.764426 containerd[1627]: time="2026-01-24T03:07:51.762123139Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 24 03:07:51.768861 systemd-networkd[1260]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:6ba:24:19ff:fef4:1aea/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:6ba:24:19ff:fef4:1aea/64 assigned by NDisc.
Jan 24 03:07:51.768872 systemd-networkd[1260]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 24 03:07:51.872776 containerd[1627]: time="2026-01-24T03:07:51.872306635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 24 03:07:51.880955 sshd_keygen[1611]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 24 03:07:51.892017 containerd[1627]: time="2026-01-24T03:07:51.887913924Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 24 03:07:51.892017 containerd[1627]: time="2026-01-24T03:07:51.887966668Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 24 03:07:51.892017 containerd[1627]: time="2026-01-24T03:07:51.887991707Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 24 03:07:51.892017 containerd[1627]: time="2026-01-24T03:07:51.888328211Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 24 03:07:51.892017 containerd[1627]: time="2026-01-24T03:07:51.888364347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 24 03:07:51.892017 containerd[1627]: time="2026-01-24T03:07:51.888476517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 03:07:51.892017 containerd[1627]: time="2026-01-24T03:07:51.888500175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 24 03:07:51.892017 containerd[1627]: time="2026-01-24T03:07:51.888806516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 03:07:51.892017 containerd[1627]: time="2026-01-24T03:07:51.888832254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 24 03:07:51.892017 containerd[1627]: time="2026-01-24T03:07:51.888853692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 03:07:51.892017 containerd[1627]: time="2026-01-24T03:07:51.888871840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 24 03:07:51.892506 containerd[1627]: time="2026-01-24T03:07:51.889051815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 24 03:07:51.892506 containerd[1627]: time="2026-01-24T03:07:51.889474322Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 24 03:07:51.899467 containerd[1627]: time="2026-01-24T03:07:51.898744434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 03:07:51.899467 containerd[1627]: time="2026-01-24T03:07:51.898794811Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 24 03:07:51.899467 containerd[1627]: time="2026-01-24T03:07:51.899074362Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 24 03:07:51.899467 containerd[1627]: time="2026-01-24T03:07:51.899188447Z" level=info msg="metadata content store policy set" policy=shared
Jan 24 03:07:51.914868 containerd[1627]: time="2026-01-24T03:07:51.908729920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 24 03:07:51.914868 containerd[1627]: time="2026-01-24T03:07:51.908822984Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 24 03:07:51.914868 containerd[1627]: time="2026-01-24T03:07:51.908854405Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 24 03:07:51.914868 containerd[1627]: time="2026-01-24T03:07:51.908878876Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 24 03:07:51.914868 containerd[1627]: time="2026-01-24T03:07:51.908911058Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 24 03:07:51.914868 containerd[1627]: time="2026-01-24T03:07:51.912828257Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 24 03:07:51.914868 containerd[1627]: time="2026-01-24T03:07:51.913582894Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 24 03:07:51.914868 containerd[1627]: time="2026-01-24T03:07:51.913876798Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 24 03:07:51.914868 containerd[1627]: time="2026-01-24T03:07:51.913905109Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 24 03:07:51.914868 containerd[1627]: time="2026-01-24T03:07:51.913928046Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 24 03:07:51.914868 containerd[1627]: time="2026-01-24T03:07:51.913968542Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 24 03:07:51.914868 containerd[1627]: time="2026-01-24T03:07:51.913999943Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 24 03:07:51.914868 containerd[1627]: time="2026-01-24T03:07:51.914037264Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 24 03:07:51.914868 containerd[1627]: time="2026-01-24T03:07:51.914061375Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 24 03:07:51.915553 containerd[1627]: time="2026-01-24T03:07:51.914083425Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 24 03:07:51.915553 containerd[1627]: time="2026-01-24T03:07:51.914116443Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 24 03:07:51.915553 containerd[1627]: time="2026-01-24T03:07:51.914139458Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 24 03:07:51.915553 containerd[1627]: time="2026-01-24T03:07:51.914159398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 24 03:07:51.915553 containerd[1627]: time="2026-01-24T03:07:51.914257051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 24 03:07:51.915553 containerd[1627]: time="2026-01-24T03:07:51.914311280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 24 03:07:51.915553 containerd[1627]: time="2026-01-24T03:07:51.914335733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 24 03:07:51.915553 containerd[1627]: time="2026-01-24T03:07:51.914379469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 24 03:07:51.915553 containerd[1627]: time="2026-01-24T03:07:51.914445010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 24 03:07:51.915553 containerd[1627]: time="2026-01-24T03:07:51.914470172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 24 03:07:51.915553 containerd[1627]: time="2026-01-24T03:07:51.914493765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 24 03:07:51.915553 containerd[1627]: time="2026-01-24T03:07:51.914521775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 24 03:07:51.915553 containerd[1627]: time="2026-01-24T03:07:51.914544491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 24 03:07:51.915553 containerd[1627]: time="2026-01-24T03:07:51.914567820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 24 03:07:51.916070 containerd[1627]: time="2026-01-24T03:07:51.914588205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 24 03:07:51.916070 containerd[1627]: time="2026-01-24T03:07:51.914647038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 24 03:07:52.082762 containerd[1627]: time="2026-01-24T03:07:51.928547512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 24 03:07:52.082762 containerd[1627]: time="2026-01-24T03:07:51.928689986Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 24 03:07:52.082762 containerd[1627]: time="2026-01-24T03:07:51.928770421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 24 03:07:52.082762 containerd[1627]: time="2026-01-24T03:07:51.928850814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 24 03:07:52.082762 containerd[1627]: time="2026-01-24T03:07:51.928875543Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 24 03:07:52.082762 containerd[1627]: time="2026-01-24T03:07:51.929008270Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 24 03:07:52.082762 containerd[1627]: time="2026-01-24T03:07:51.929049334Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 24 03:07:52.082762 containerd[1627]: time="2026-01-24T03:07:51.929090654Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 24 03:07:52.082762 containerd[1627]: time="2026-01-24T03:07:51.929113630Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 24 03:07:52.082762 containerd[1627]: time="2026-01-24T03:07:51.929130435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 24 03:07:52.082762 containerd[1627]: time="2026-01-24T03:07:51.929185126Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 24 03:07:52.082762 containerd[1627]: time="2026-01-24T03:07:51.929229757Z" level=info msg="NRI interface is disabled by configuration."
Jan 24 03:07:52.082762 containerd[1627]: time="2026-01-24T03:07:51.929272781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 24 03:07:51.938105 systemd[1]: Started containerd.service - containerd container runtime.
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.929789469Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.929894939Z" level=info msg="Connect containerd service"
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.930005692Z" level=info msg="using legacy CRI server"
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.930025056Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.930268614Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.936719178Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.937351383Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.937437645Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.937507116Z" level=info msg="Start subscribing containerd event"
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.937573749Z" level=info msg="Start recovering state"
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.937791464Z" level=info msg="Start event monitor"
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.937820034Z" level=info msg="Start snapshots syncer"
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.937840035Z" level=info msg="Start cni network conf syncer for default"
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.937852770Z" level=info msg="Start streaming server"
Jan 24 03:07:52.083456 containerd[1627]: time="2026-01-24T03:07:51.939399048Z" level=info msg="containerd successfully booted in 0.182052s"
Jan 24 03:07:52.111790 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 24 03:07:52.126412 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 24 03:07:52.154340 systemd[1]: issuegen.service: Deactivated successfully.
Jan 24 03:07:52.154769 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 24 03:07:52.171804 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 24 03:07:52.303258 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 24 03:07:52.318269 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 24 03:07:52.331226 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 24 03:07:52.334574 systemd[1]: Reached target getty.target - Login Prompts.
Jan 24 03:07:53.019510 tar[1616]: linux-amd64/README.md
Jan 24 03:07:53.049359 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 24 03:07:53.269871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 03:07:53.282346 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 03:07:54.020109 kubelet[1728]: E0124 03:07:54.020023 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 03:07:54.022344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 03:07:54.022710 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 03:07:54.544081 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 24 03:07:54.553033 systemd[1]: Started sshd@0-10.244.26.234:22-20.161.92.111:58142.service - OpenSSH per-connection server daemon (20.161.92.111:58142).
Jan 24 03:07:55.127125 sshd[1738]: Accepted publickey for core from 20.161.92.111 port 58142 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 03:07:55.130886 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 03:07:55.149692 systemd-logind[1597]: New session 1 of user core.
Jan 24 03:07:55.151137 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 24 03:07:55.168031 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 24 03:07:55.187286 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 24 03:07:55.201169 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 24 03:07:55.207359 (systemd)[1745]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 24 03:07:55.351311 systemd[1745]: Queued start job for default target default.target.
Jan 24 03:07:55.351875 systemd[1745]: Created slice app.slice - User Application Slice.
Jan 24 03:07:55.351914 systemd[1745]: Reached target paths.target - Paths.
Jan 24 03:07:55.351937 systemd[1745]: Reached target timers.target - Timers.
Jan 24 03:07:55.363807 systemd[1745]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 24 03:07:55.374672 systemd[1745]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 24 03:07:55.375433 systemd[1745]: Reached target sockets.target - Sockets.
Jan 24 03:07:55.375460 systemd[1745]: Reached target basic.target - Basic System.
Jan 24 03:07:55.375545 systemd[1745]: Reached target default.target - Main User Target.
Jan 24 03:07:55.375623 systemd[1745]: Startup finished in 159ms.
Jan 24 03:07:55.375763 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 24 03:07:55.386435 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 24 03:07:55.808133 systemd[1]: Started sshd@1-10.244.26.234:22-20.161.92.111:58154.service - OpenSSH per-connection server daemon (20.161.92.111:58154).
Jan 24 03:07:56.374818 sshd[1757]: Accepted publickey for core from 20.161.92.111 port 58154 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 03:07:56.376875 sshd[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 03:07:56.383984 systemd-logind[1597]: New session 2 of user core.
Jan 24 03:07:56.395186 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 24 03:07:56.786574 sshd[1757]: pam_unix(sshd:session): session closed for user core
Jan 24 03:07:56.790917 systemd[1]: sshd@1-10.244.26.234:22-20.161.92.111:58154.service: Deactivated successfully.
Jan 24 03:07:56.795818 systemd-logind[1597]: Session 2 logged out. Waiting for processes to exit.
Jan 24 03:07:56.796447 systemd[1]: session-2.scope: Deactivated successfully.
Jan 24 03:07:56.798191 systemd-logind[1597]: Removed session 2.
Jan 24 03:07:56.885994 systemd[1]: Started sshd@2-10.244.26.234:22-20.161.92.111:58168.service - OpenSSH per-connection server daemon (20.161.92.111:58168).
Jan 24 03:07:57.371893 login[1713]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 24 03:07:57.382135 systemd-logind[1597]: New session 3 of user core.
Jan 24 03:07:57.386316 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 24 03:07:57.388458 login[1712]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 24 03:07:57.406964 systemd-logind[1597]: New session 4 of user core.
Jan 24 03:07:57.419323 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 24 03:07:57.449393 sshd[1765]: Accepted publickey for core from 20.161.92.111 port 58168 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 03:07:57.451352 sshd[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 03:07:57.460740 systemd-logind[1597]: New session 5 of user core.
Jan 24 03:07:57.470242 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 24 03:07:57.851959 sshd[1765]: pam_unix(sshd:session): session closed for user core
Jan 24 03:07:57.858098 systemd[1]: sshd@2-10.244.26.234:22-20.161.92.111:58168.service: Deactivated successfully.
Jan 24 03:07:57.861550 systemd[1]: session-5.scope: Deactivated successfully.
Jan 24 03:07:57.863267 systemd-logind[1597]: Session 5 logged out. Waiting for processes to exit.
Jan 24 03:07:57.865009 systemd-logind[1597]: Removed session 5.
Jan 24 03:07:58.129758 coreos-metadata[1574]: Jan 24 03:07:58.128 WARN failed to locate config-drive, using the metadata service API instead
Jan 24 03:07:58.154865 coreos-metadata[1574]: Jan 24 03:07:58.154 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 24 03:07:58.161522 coreos-metadata[1574]: Jan 24 03:07:58.161 INFO Fetch failed with 404: resource not found
Jan 24 03:07:58.161522 coreos-metadata[1574]: Jan 24 03:07:58.161 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 24 03:07:58.162277 coreos-metadata[1574]: Jan 24 03:07:58.162 INFO Fetch successful
Jan 24 03:07:58.162491 coreos-metadata[1574]: Jan 24 03:07:58.162 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 24 03:07:58.187005 coreos-metadata[1574]: Jan 24 03:07:58.186 INFO Fetch successful
Jan 24 03:07:58.187005 coreos-metadata[1574]: Jan 24 03:07:58.186 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 24 03:07:58.224095 coreos-metadata[1574]: Jan 24 03:07:58.224 INFO Fetch successful
Jan 24 03:07:58.224095 coreos-metadata[1574]: Jan 24 03:07:58.224 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 24 03:07:58.246651 coreos-metadata[1574]: Jan 24 03:07:58.246 INFO Fetch successful
Jan 24 03:07:58.246651 coreos-metadata[1574]: Jan 24 03:07:58.246 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 24 03:07:58.269324 coreos-metadata[1574]: Jan 24 03:07:58.269 INFO Fetch successful
Jan 24 03:07:58.299711 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 24 03:07:58.301372 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 24 03:07:58.707026 coreos-metadata[1674]: Jan 24 03:07:58.706 WARN failed to locate config-drive, using the metadata service API instead
Jan 24 03:07:58.729185 coreos-metadata[1674]: Jan 24 03:07:58.729 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 24 03:07:58.749177 coreos-metadata[1674]: Jan 24 03:07:58.749 INFO Fetch successful
Jan 24 03:07:58.749344 coreos-metadata[1674]: Jan 24 03:07:58.749 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 24 03:07:58.783978 coreos-metadata[1674]: Jan 24 03:07:58.783 INFO Fetch successful
Jan 24 03:07:58.786060 unknown[1674]: wrote ssh authorized keys file for user: core
Jan 24 03:07:58.808828 update-ssh-keys[1814]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 03:07:58.809810 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 24 03:07:58.817943 systemd[1]: Finished sshkeys.service.
Jan 24 03:07:58.822863 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 24 03:07:58.823329 systemd[1]: Startup finished in 17.682s (kernel) + 12.850s (userspace) = 30.532s.
Jan 24 03:08:04.273115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 24 03:08:04.279882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 03:08:04.474817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 03:08:04.480533 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 03:08:04.599274 kubelet[1832]: E0124 03:08:04.599094 1832 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 03:08:04.603781 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 03:08:04.604191 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 03:08:07.951058 systemd[1]: Started sshd@3-10.244.26.234:22-20.161.92.111:39714.service - OpenSSH per-connection server daemon (20.161.92.111:39714).
Jan 24 03:08:08.527988 sshd[1840]: Accepted publickey for core from 20.161.92.111 port 39714 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 03:08:08.529935 sshd[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 03:08:08.538881 systemd-logind[1597]: New session 6 of user core.
Jan 24 03:08:08.541111 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 24 03:08:08.951647 sshd[1840]: pam_unix(sshd:session): session closed for user core
Jan 24 03:08:08.955740 systemd[1]: sshd@3-10.244.26.234:22-20.161.92.111:39714.service: Deactivated successfully.
Jan 24 03:08:08.960047 systemd-logind[1597]: Session 6 logged out. Waiting for processes to exit.
Jan 24 03:08:08.960110 systemd[1]: session-6.scope: Deactivated successfully.
Jan 24 03:08:08.963008 systemd-logind[1597]: Removed session 6.
Jan 24 03:08:09.048208 systemd[1]: Started sshd@4-10.244.26.234:22-20.161.92.111:39726.service - OpenSSH per-connection server daemon (20.161.92.111:39726).
Jan 24 03:08:09.631280 sshd[1848]: Accepted publickey for core from 20.161.92.111 port 39726 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 03:08:09.633429 sshd[1848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 03:08:09.639866 systemd-logind[1597]: New session 7 of user core.
Jan 24 03:08:09.653571 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 24 03:08:10.041003 sshd[1848]: pam_unix(sshd:session): session closed for user core
Jan 24 03:08:10.045925 systemd[1]: sshd@4-10.244.26.234:22-20.161.92.111:39726.service: Deactivated successfully.
Jan 24 03:08:10.049875 systemd[1]: session-7.scope: Deactivated successfully.
Jan 24 03:08:10.050278 systemd-logind[1597]: Session 7 logged out. Waiting for processes to exit.
Jan 24 03:08:10.053029 systemd-logind[1597]: Removed session 7.
Jan 24 03:08:10.154755 systemd[1]: Started sshd@5-10.244.26.234:22-20.161.92.111:39740.service - OpenSSH per-connection server daemon (20.161.92.111:39740).
Jan 24 03:08:10.728615 sshd[1856]: Accepted publickey for core from 20.161.92.111 port 39740 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 03:08:10.731193 sshd[1856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 03:08:10.739100 systemd-logind[1597]: New session 8 of user core.
Jan 24 03:08:10.750211 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 24 03:08:11.150995 sshd[1856]: pam_unix(sshd:session): session closed for user core
Jan 24 03:08:11.154981 systemd[1]: sshd@5-10.244.26.234:22-20.161.92.111:39740.service: Deactivated successfully.
Jan 24 03:08:11.159552 systemd-logind[1597]: Session 8 logged out. Waiting for processes to exit.
Jan 24 03:08:11.160535 systemd[1]: session-8.scope: Deactivated successfully.
Jan 24 03:08:11.162312 systemd-logind[1597]: Removed session 8.
Jan 24 03:08:11.254037 systemd[1]: Started sshd@6-10.244.26.234:22-20.161.92.111:39742.service - OpenSSH per-connection server daemon (20.161.92.111:39742).
Jan 24 03:08:11.870968 sshd[1864]: Accepted publickey for core from 20.161.92.111 port 39742 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 03:08:11.873199 sshd[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 03:08:11.881383 systemd-logind[1597]: New session 9 of user core.
Jan 24 03:08:11.888970 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 24 03:08:12.214646 sudo[1868]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 24 03:08:12.215127 sudo[1868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 03:08:12.228615 sudo[1868]: pam_unix(sudo:session): session closed for user root
Jan 24 03:08:12.319332 sshd[1864]: pam_unix(sshd:session): session closed for user core
Jan 24 03:08:12.325118 systemd[1]: sshd@6-10.244.26.234:22-20.161.92.111:39742.service: Deactivated successfully.
Jan 24 03:08:12.328547 systemd[1]: session-9.scope: Deactivated successfully.
Jan 24 03:08:12.328755 systemd-logind[1597]: Session 9 logged out. Waiting for processes to exit.
Jan 24 03:08:12.331574 systemd-logind[1597]: Removed session 9.
Jan 24 03:08:12.428088 systemd[1]: Started sshd@7-10.244.26.234:22-20.161.92.111:53688.service - OpenSSH per-connection server daemon (20.161.92.111:53688).
Jan 24 03:08:13.046676 sshd[1873]: Accepted publickey for core from 20.161.92.111 port 53688 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 03:08:13.048673 sshd[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 03:08:13.055746 systemd-logind[1597]: New session 10 of user core.
Jan 24 03:08:13.066116 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 24 03:08:13.375307 sudo[1878]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 24 03:08:13.375786 sudo[1878]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 03:08:13.381443 sudo[1878]: pam_unix(sudo:session): session closed for user root
Jan 24 03:08:13.389785 sudo[1877]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 24 03:08:13.390239 sudo[1877]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 03:08:13.408973 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 24 03:08:13.423328 auditctl[1881]: No rules
Jan 24 03:08:13.424129 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 24 03:08:13.424464 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 24 03:08:13.434448 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 03:08:13.469938 augenrules[1901]: No rules
Jan 24 03:08:13.471525 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 03:08:13.473530 sudo[1877]: pam_unix(sudo:session): session closed for user root
Jan 24 03:08:13.564810 sshd[1873]: pam_unix(sshd:session): session closed for user core
Jan 24 03:08:13.568669 systemd[1]: sshd@7-10.244.26.234:22-20.161.92.111:53688.service: Deactivated successfully.
Jan 24 03:08:13.572589 systemd-logind[1597]: Session 10 logged out. Waiting for processes to exit.
Jan 24 03:08:13.574137 systemd[1]: session-10.scope: Deactivated successfully.
Jan 24 03:08:13.575564 systemd-logind[1597]: Removed session 10.
Jan 24 03:08:13.662960 systemd[1]: Started sshd@8-10.244.26.234:22-20.161.92.111:53704.service - OpenSSH per-connection server daemon (20.161.92.111:53704).
Jan 24 03:08:14.226495 sshd[1910]: Accepted publickey for core from 20.161.92.111 port 53704 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 03:08:14.228541 sshd[1910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 03:08:14.235811 systemd-logind[1597]: New session 11 of user core.
Jan 24 03:08:14.243044 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 24 03:08:14.542276 sudo[1914]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 24 03:08:14.542803 sudo[1914]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 03:08:14.639383 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 24 03:08:14.648866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 03:08:14.987894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 03:08:15.013191 (kubelet)[1934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 03:08:15.234956 kubelet[1934]: E0124 03:08:15.234849 1934 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 03:08:15.239910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 03:08:15.240415 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 03:08:15.650099 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 24 03:08:15.650376 (dockerd)[1949]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 24 03:08:16.461076 dockerd[1949]: time="2026-01-24T03:08:16.460956958Z" level=info msg="Starting up"
Jan 24 03:08:16.616344 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2647037117-merged.mount: Deactivated successfully.
Jan 24 03:08:16.784526 dockerd[1949]: time="2026-01-24T03:08:16.783735254Z" level=info msg="Loading containers: start."
Jan 24 03:08:16.963640 kernel: Initializing XFRM netlink socket
Jan 24 03:08:17.078965 systemd-networkd[1260]: docker0: Link UP
Jan 24 03:08:17.100764 dockerd[1949]: time="2026-01-24T03:08:17.100648143Z" level=info msg="Loading containers: done."
Jan 24 03:08:17.129490 dockerd[1949]: time="2026-01-24T03:08:17.128571958Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 24 03:08:17.129490 dockerd[1949]: time="2026-01-24T03:08:17.128885983Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 24 03:08:17.129490 dockerd[1949]: time="2026-01-24T03:08:17.129069059Z" level=info msg="Daemon has completed initialization"
Jan 24 03:08:17.190349 dockerd[1949]: time="2026-01-24T03:08:17.190241782Z" level=info msg="API listen on /run/docker.sock"
Jan 24 03:08:17.191349 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 24 03:08:18.566630 containerd[1627]: time="2026-01-24T03:08:18.566003650Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 24 03:08:19.502087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730175514.mount: Deactivated successfully.
Jan 24 03:08:21.810023 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 24 03:08:21.852006 containerd[1627]: time="2026-01-24T03:08:21.851907814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:21.853753 containerd[1627]: time="2026-01-24T03:08:21.853685478Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070655" Jan 24 03:08:21.855194 containerd[1627]: time="2026-01-24T03:08:21.855142220Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:21.861648 containerd[1627]: time="2026-01-24T03:08:21.859736163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:21.862002 containerd[1627]: time="2026-01-24T03:08:21.861950428Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.295840004s" Jan 24 03:08:21.862162 containerd[1627]: time="2026-01-24T03:08:21.862134025Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 03:08:21.863896 containerd[1627]: time="2026-01-24T03:08:21.863839041Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 03:08:25.389160 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 24 03:08:25.395821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 03:08:25.683819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 03:08:25.696324 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 03:08:25.799381 kubelet[2167]: E0124 03:08:25.799251 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 03:08:25.801273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 03:08:25.801649 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
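A quick sanity check on the pull timing above: 29,070,655 bytes in 3.296 s is about 8.8 MB/s from registry.k8s.io, and the other large pulls in this section work out to roughly 4 to 16 MB/s, so the multi-second pulls are plain bandwidth, not registry stalls. The kubelet failure repeated here (restart counter 3) is the same missing-config crash loop noted earlier.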
Jan 24 03:08:28.076810 containerd[1627]: time="2026-01-24T03:08:28.075614420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:28.078021 containerd[1627]: time="2026-01-24T03:08:28.077969363Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993362" Jan 24 03:08:28.079680 containerd[1627]: time="2026-01-24T03:08:28.079648453Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:28.084786 containerd[1627]: time="2026-01-24T03:08:28.084746075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:28.086891 containerd[1627]: time="2026-01-24T03:08:28.086817673Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 6.222793353s" Jan 24 03:08:28.086969 containerd[1627]: time="2026-01-24T03:08:28.086896472Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 03:08:28.088628 containerd[1627]: time="2026-01-24T03:08:28.088555230Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 03:08:30.122457 containerd[1627]: time="2026-01-24T03:08:30.122391363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:30.124205 containerd[1627]: time="2026-01-24T03:08:30.123861838Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405084" Jan 24 03:08:30.126621 containerd[1627]: time="2026-01-24T03:08:30.125209065Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:30.129406 containerd[1627]: time="2026-01-24T03:08:30.129370912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:30.131182 containerd[1627]: time="2026-01-24T03:08:30.131138676Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.04227102s" Jan 24 03:08:30.131259 containerd[1627]: time="2026-01-24T03:08:30.131187400Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 03:08:30.132195 
containerd[1627]: time="2026-01-24T03:08:30.132161424Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 03:08:33.578120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1894555572.mount: Deactivated successfully. Jan 24 03:08:35.128763 containerd[1627]: time="2026-01-24T03:08:35.127645230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:35.128763 containerd[1627]: time="2026-01-24T03:08:35.128705073Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161907" Jan 24 03:08:35.129752 containerd[1627]: time="2026-01-24T03:08:35.129693764Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:35.132794 containerd[1627]: time="2026-01-24T03:08:35.132739297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:35.134097 containerd[1627]: time="2026-01-24T03:08:35.133912782Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 5.001706448s" Jan 24 03:08:35.134097 containerd[1627]: time="2026-01-24T03:08:35.133962000Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 03:08:35.135257 containerd[1627]: time="2026-01-24T03:08:35.135220171Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 03:08:35.829332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3800734853.mount: Deactivated successfully. Jan 24 03:08:35.831041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 24 03:08:35.839814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 03:08:36.117922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 03:08:36.136180 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 03:08:36.259321 kubelet[2214]: E0124 03:08:36.259264 2214 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 03:08:36.262821 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 03:08:36.263143 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 03:08:36.305938 update_engine[1603]: I20260124 03:08:36.304733 1603 update_attempter.cc:509] Updating boot flags... 
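"Updating boot flags" is Flatcar's update_engine doing its A/B bookkeeping: after a boot it adjusts the GPT priority/tries/successful attributes so the bootloader keeps selecting the slot that just booted. The BTRFS duplicate-device warnings that follow appear to be fallout from that rewrite (udev re-scans /dev/vda3 and the kernel reports a device it already knows, rather than a genuine second disk). The attributes can be inspected with the cgpt tool that Flatcar ships; an illustrative invocation:

    cgpt show /dev/vda    # lists partitions with their priority/tries/successful attributes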
Jan 24 03:08:36.450118 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2231) Jan 24 03:08:36.541645 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2230) Jan 24 03:08:37.752063 containerd[1627]: time="2026-01-24T03:08:37.752005455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:37.753695 containerd[1627]: time="2026-01-24T03:08:37.753652734Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jan 24 03:08:37.756629 containerd[1627]: time="2026-01-24T03:08:37.755008875Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:37.762302 containerd[1627]: time="2026-01-24T03:08:37.762212127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:37.764024 containerd[1627]: time="2026-01-24T03:08:37.763402113Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.628139775s" Jan 24 03:08:37.764024 containerd[1627]: time="2026-01-24T03:08:37.763455612Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 03:08:37.764356 containerd[1627]: time="2026-01-24T03:08:37.764325226Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 03:08:38.336560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1674161770.mount: Deactivated successfully. 
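Unit names like var-lib-containerd-tmpmounts-containerd\x2dmount1674161770.mount are systemd path escaping, not corruption: '/' becomes '-', and a literal '-' inside a path component is encoded as \x2d. The round trip is easy to verify with systemd-escape:

    systemd-escape --path /var/lib/containerd/tmpmounts/containerd-mount1674161770
    # -> var-lib-containerd-tmpmounts-containerd\x2dmount1674161770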
Jan 24 03:08:38.350971 containerd[1627]: time="2026-01-24T03:08:38.350750783Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 24 03:08:38.350971 containerd[1627]: time="2026-01-24T03:08:38.350882096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:38.354521 containerd[1627]: time="2026-01-24T03:08:38.354471449Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:38.355701 containerd[1627]: time="2026-01-24T03:08:38.355639326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:38.357188 containerd[1627]: time="2026-01-24T03:08:38.356953659Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 591.327655ms" Jan 24 03:08:38.357188 containerd[1627]: time="2026-01-24T03:08:38.356997551Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 03:08:38.358225 containerd[1627]: time="2026-01-24T03:08:38.358177915Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 03:08:38.984636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801157000.mount: Deactivated successfully. Jan 24 03:08:41.882341 containerd[1627]: time="2026-01-24T03:08:41.882247686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:41.884657 containerd[1627]: time="2026-01-24T03:08:41.884261723Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Jan 24 03:08:41.885938 containerd[1627]: time="2026-01-24T03:08:41.885875205Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:41.890441 containerd[1627]: time="2026-01-24T03:08:41.890407943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:08:41.892583 containerd[1627]: time="2026-01-24T03:08:41.892400816Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.53416677s" Jan 24 03:08:41.892583 containerd[1627]: time="2026-01-24T03:08:41.892444269Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 24 03:08:45.766478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
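This stop is deliberate: the reload and restart churn that follows comes from session-11, the shell that ran /home/core/install.sh and is evidently driving a kubeadm bootstrap. The tell is in the next start: the unset-variable warning names only KUBELET_EXTRA_ARGS, whereas every earlier start also listed KUBELET_KUBEADM_ARGS. That variable is now populated, which is how kubeadm hands the kubelet its flags (via an environment file, conventionally /var/lib/kubelet/kubeadm-flags.env).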
Jan 24 03:08:45.773897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 03:08:45.819351 systemd[1]: Reloading requested from client PID 2365 ('systemctl') (unit session-11.scope)... Jan 24 03:08:45.819583 systemd[1]: Reloading... Jan 24 03:08:46.045631 zram_generator::config[2400]: No configuration found. Jan 24 03:08:46.199159 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 03:08:46.310536 systemd[1]: Reloading finished in 490 ms. Jan 24 03:08:46.380377 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 03:08:46.380758 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 03:08:46.381484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 03:08:46.389281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 03:08:46.546926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 03:08:46.561215 (kubelet)[2483]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 03:08:46.664347 kubelet[2483]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 03:08:46.666919 kubelet[2483]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 03:08:46.666919 kubelet[2483]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
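The three deprecation warnings are flags that kubeadm still passes on the command line but that belong in the config file. Under kubelet.config.k8s.io/v1beta1 the first and third have direct config equivalents; --pod-infra-container-image remains flag-only and, per the message itself, is slated for removal in 1.35. A hedged sketch of the equivalent config fields (the socket path is an assumption based on containerd being the runtime here; the plugin directory matches the probe.go line further down):

    # additions to /var/lib/kubelet/config.yaml (illustrative only)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/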
Jan 24 03:08:46.667223 kubelet[2483]: I0124 03:08:46.667160 2483 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 03:08:47.539124 kubelet[2483]: I0124 03:08:47.539059 2483 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 03:08:47.539124 kubelet[2483]: I0124 03:08:47.539115 2483 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 03:08:47.540622 kubelet[2483]: I0124 03:08:47.539711 2483 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 03:08:47.578224 kubelet[2483]: E0124 03:08:47.578146 2483 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.26.234:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.26.234:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:08:47.584656 kubelet[2483]: I0124 03:08:47.584291 2483 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 03:08:47.611111 kubelet[2483]: E0124 03:08:47.611041 2483 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 03:08:47.611111 kubelet[2483]: I0124 03:08:47.611127 2483 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 03:08:47.619815 kubelet[2483]: I0124 03:08:47.619777 2483 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 03:08:47.624579 kubelet[2483]: I0124 03:08:47.624448 2483 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 03:08:47.624991 kubelet[2483]: I0124 03:08:47.624532 2483 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-jddbi.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 24 03:08:47.626990 kubelet[2483]: I0124 03:08:47.626940 2483 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 03:08:47.626990 kubelet[2483]: I0124 03:08:47.626977 2483 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 03:08:47.628438 kubelet[2483]: I0124 03:08:47.628384 2483 state_mem.go:36] "Initialized new in-memory state store" Jan 24 03:08:47.632626 kubelet[2483]: I0124 03:08:47.632536 2483 kubelet.go:446] "Attempting to sync node with API server" Jan 24 03:08:47.632741 kubelet[2483]: I0124 03:08:47.632655 2483 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 03:08:47.632805 kubelet[2483]: I0124 03:08:47.632764 2483 kubelet.go:352] "Adding apiserver pod source" Jan 24 03:08:47.632871 kubelet[2483]: I0124 03:08:47.632810 2483 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 03:08:47.642445 kubelet[2483]: W0124 03:08:47.641888 2483 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.26.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jddbi.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.26.234:6443: connect: connection refused Jan 24 03:08:47.643093 kubelet[2483]: E0124 03:08:47.642832 2483 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.26.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jddbi.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.26.234:6443: connect: connection refused" logger="UnhandledError" Jan 24 
03:08:47.644631 kubelet[2483]: I0124 03:08:47.644404 2483 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 03:08:47.650103 kubelet[2483]: I0124 03:08:47.649815 2483 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 03:08:47.651628 kubelet[2483]: W0124 03:08:47.650772 2483 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 03:08:47.652143 kubelet[2483]: W0124 03:08:47.652084 2483 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.26.234:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.26.234:6443: connect: connection refused Jan 24 03:08:47.652240 kubelet[2483]: E0124 03:08:47.652161 2483 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.26.234:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.26.234:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:08:47.653939 kubelet[2483]: I0124 03:08:47.653916 2483 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 03:08:47.654105 kubelet[2483]: I0124 03:08:47.654086 2483 server.go:1287] "Started kubelet" Jan 24 03:08:47.658755 kubelet[2483]: I0124 03:08:47.658657 2483 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 03:08:47.660880 kubelet[2483]: I0124 03:08:47.660857 2483 server.go:479] "Adding debug handlers to kubelet server" Jan 24 03:08:47.663719 kubelet[2483]: I0124 03:08:47.663575 2483 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 03:08:47.664270 kubelet[2483]: I0124 03:08:47.664243 2483 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 03:08:47.664532 kubelet[2483]: I0124 03:08:47.664490 2483 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 03:08:47.669649 kubelet[2483]: E0124 03:08:47.667752 2483 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.26.234:6443/api/v1/namespaces/default/events\": dial tcp 10.244.26.234:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-jddbi.gb1.brightbox.com.188d8bf868de4b75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-jddbi.gb1.brightbox.com,UID:srv-jddbi.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-jddbi.gb1.brightbox.com,},FirstTimestamp:2026-01-24 03:08:47.654022005 +0000 UTC m=+1.086538682,LastTimestamp:2026-01-24 03:08:47.654022005 +0000 UTC m=+1.086538682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-jddbi.gb1.brightbox.com,}" Jan 24 03:08:47.670310 kubelet[2483]: I0124 03:08:47.670277 2483 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 03:08:47.677841 kubelet[2483]: I0124 03:08:47.677808 2483 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 03:08:47.678499 
kubelet[2483]: E0124 03:08:47.678468 2483 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-jddbi.gb1.brightbox.com\" not found" Jan 24 03:08:47.679340 kubelet[2483]: I0124 03:08:47.679314 2483 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 03:08:47.679695 kubelet[2483]: I0124 03:08:47.679674 2483 reconciler.go:26] "Reconciler: start to sync state" Jan 24 03:08:47.680395 kubelet[2483]: W0124 03:08:47.680344 2483 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.26.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.26.234:6443: connect: connection refused Jan 24 03:08:47.680540 kubelet[2483]: E0124 03:08:47.680506 2483 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.26.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.26.234:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:08:47.680898 kubelet[2483]: E0124 03:08:47.680852 2483 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.26.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jddbi.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.26.234:6443: connect: connection refused" interval="200ms" Jan 24 03:08:47.682053 kubelet[2483]: I0124 03:08:47.681651 2483 factory.go:221] Registration of the systemd container factory successfully Jan 24 03:08:47.682053 kubelet[2483]: I0124 03:08:47.681811 2483 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 03:08:47.684931 kubelet[2483]: E0124 03:08:47.684908 2483 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 03:08:47.685540 kubelet[2483]: I0124 03:08:47.685518 2483 factory.go:221] Registration of the containerd container factory successfully Jan 24 03:08:47.699641 kubelet[2483]: I0124 03:08:47.697362 2483 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 03:08:47.699641 kubelet[2483]: I0124 03:08:47.698895 2483 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 03:08:47.699641 kubelet[2483]: I0124 03:08:47.698982 2483 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 03:08:47.699641 kubelet[2483]: I0124 03:08:47.699084 2483 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
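Every "connection refused" against 10.244.26.234:6443 in this stretch is the same condition: the kubelet is up before the API server it is about to launch. The static-pod bootstrap is circular by design; the kubelet reads /etc/kubernetes/manifests, starts kube-apiserver as a static pod, and only then can its informers, lease controller, and node registration succeed. Until the sandboxes created below are running, probing the endpoint fails the same way (illustrative):

    curl -sk https://10.244.26.234:6443/healthz    # connection refused until the apiserver static pod is up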
Jan 24 03:08:47.699641 kubelet[2483]: I0124 03:08:47.699110 2483 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 03:08:47.699641 kubelet[2483]: E0124 03:08:47.699243 2483 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 03:08:47.732277 kubelet[2483]: W0124 03:08:47.732193 2483 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.26.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.26.234:6443: connect: connection refused Jan 24 03:08:47.732650 kubelet[2483]: E0124 03:08:47.732615 2483 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.26.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.26.234:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:08:47.748084 kubelet[2483]: I0124 03:08:47.748042 2483 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 03:08:47.748084 kubelet[2483]: I0124 03:08:47.748076 2483 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 03:08:47.748349 kubelet[2483]: I0124 03:08:47.748133 2483 state_mem.go:36] "Initialized new in-memory state store" Jan 24 03:08:47.750047 kubelet[2483]: I0124 03:08:47.749995 2483 policy_none.go:49] "None policy: Start" Jan 24 03:08:47.750155 kubelet[2483]: I0124 03:08:47.750069 2483 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 03:08:47.750155 kubelet[2483]: I0124 03:08:47.750124 2483 state_mem.go:35] "Initializing new in-memory state store" Jan 24 03:08:47.760266 kubelet[2483]: I0124 03:08:47.758363 2483 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 03:08:47.760266 kubelet[2483]: I0124 03:08:47.758815 2483 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 03:08:47.760266 kubelet[2483]: I0124 03:08:47.758868 2483 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 03:08:47.761481 kubelet[2483]: I0124 03:08:47.761420 2483 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 03:08:47.771025 kubelet[2483]: E0124 03:08:47.770691 2483 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 03:08:47.771025 kubelet[2483]: E0124 03:08:47.770891 2483 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-jddbi.gb1.brightbox.com\" not found" Jan 24 03:08:47.813656 kubelet[2483]: E0124 03:08:47.811293 2483 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jddbi.gb1.brightbox.com\" not found" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:47.816798 kubelet[2483]: E0124 03:08:47.816760 2483 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jddbi.gb1.brightbox.com\" not found" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:47.820200 kubelet[2483]: E0124 03:08:47.820143 2483 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jddbi.gb1.brightbox.com\" not found" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:47.863478 kubelet[2483]: I0124 03:08:47.863424 2483 kubelet_node_status.go:75] "Attempting to register node" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:47.864228 kubelet[2483]: E0124 03:08:47.864166 2483 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.26.234:6443/api/v1/nodes\": dial tcp 10.244.26.234:6443: connect: connection refused" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:47.881757 kubelet[2483]: I0124 03:08:47.881267 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4bc4429e02a06002d54dbbb184d4749b-k8s-certs\") pod \"kube-controller-manager-srv-jddbi.gb1.brightbox.com\" (UID: \"4bc4429e02a06002d54dbbb184d4749b\") " pod="kube-system/kube-controller-manager-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:47.881757 kubelet[2483]: I0124 03:08:47.881332 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4bc4429e02a06002d54dbbb184d4749b-kubeconfig\") pod \"kube-controller-manager-srv-jddbi.gb1.brightbox.com\" (UID: \"4bc4429e02a06002d54dbbb184d4749b\") " pod="kube-system/kube-controller-manager-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:47.881757 kubelet[2483]: I0124 03:08:47.881373 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fead7718bf5599f1b4c4d67c2371bb5-kubeconfig\") pod \"kube-scheduler-srv-jddbi.gb1.brightbox.com\" (UID: \"7fead7718bf5599f1b4c4d67c2371bb5\") " pod="kube-system/kube-scheduler-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:47.881757 kubelet[2483]: I0124 03:08:47.881409 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82461c6dfc56db4352a488c0326c7db9-usr-share-ca-certificates\") pod \"kube-apiserver-srv-jddbi.gb1.brightbox.com\" (UID: \"82461c6dfc56db4352a488c0326c7db9\") " pod="kube-system/kube-apiserver-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:47.881757 kubelet[2483]: I0124 03:08:47.881445 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4bc4429e02a06002d54dbbb184d4749b-flexvolume-dir\") pod \"kube-controller-manager-srv-jddbi.gb1.brightbox.com\" (UID: 
\"4bc4429e02a06002d54dbbb184d4749b\") " pod="kube-system/kube-controller-manager-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:47.882205 kubelet[2483]: I0124 03:08:47.881477 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4bc4429e02a06002d54dbbb184d4749b-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-jddbi.gb1.brightbox.com\" (UID: \"4bc4429e02a06002d54dbbb184d4749b\") " pod="kube-system/kube-controller-manager-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:47.882205 kubelet[2483]: I0124 03:08:47.881510 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82461c6dfc56db4352a488c0326c7db9-ca-certs\") pod \"kube-apiserver-srv-jddbi.gb1.brightbox.com\" (UID: \"82461c6dfc56db4352a488c0326c7db9\") " pod="kube-system/kube-apiserver-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:47.882205 kubelet[2483]: I0124 03:08:47.881541 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82461c6dfc56db4352a488c0326c7db9-k8s-certs\") pod \"kube-apiserver-srv-jddbi.gb1.brightbox.com\" (UID: \"82461c6dfc56db4352a488c0326c7db9\") " pod="kube-system/kube-apiserver-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:47.882205 kubelet[2483]: I0124 03:08:47.881590 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4bc4429e02a06002d54dbbb184d4749b-ca-certs\") pod \"kube-controller-manager-srv-jddbi.gb1.brightbox.com\" (UID: \"4bc4429e02a06002d54dbbb184d4749b\") " pod="kube-system/kube-controller-manager-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:47.882205 kubelet[2483]: E0124 03:08:47.881902 2483 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.26.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jddbi.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.26.234:6443: connect: connection refused" interval="400ms" Jan 24 03:08:48.067993 kubelet[2483]: I0124 03:08:48.067845 2483 kubelet_node_status.go:75] "Attempting to register node" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:48.069160 kubelet[2483]: E0124 03:08:48.069070 2483 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.26.234:6443/api/v1/nodes\": dial tcp 10.244.26.234:6443: connect: connection refused" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:48.119434 containerd[1627]: time="2026-01-24T03:08:48.119330021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-jddbi.gb1.brightbox.com,Uid:82461c6dfc56db4352a488c0326c7db9,Namespace:kube-system,Attempt:0,}" Jan 24 03:08:48.120120 containerd[1627]: time="2026-01-24T03:08:48.119331370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-jddbi.gb1.brightbox.com,Uid:4bc4429e02a06002d54dbbb184d4749b,Namespace:kube-system,Attempt:0,}" Jan 24 03:08:48.123161 containerd[1627]: time="2026-01-24T03:08:48.123030266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-jddbi.gb1.brightbox.com,Uid:7fead7718bf5599f1b4c4d67c2371bb5,Namespace:kube-system,Attempt:0,}" Jan 24 03:08:48.283012 kubelet[2483]: E0124 03:08:48.282880 2483 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.244.26.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jddbi.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.26.234:6443: connect: connection refused" interval="800ms" Jan 24 03:08:48.473638 kubelet[2483]: I0124 03:08:48.473253 2483 kubelet_node_status.go:75] "Attempting to register node" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:48.473815 kubelet[2483]: E0124 03:08:48.473721 2483 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.26.234:6443/api/v1/nodes\": dial tcp 10.244.26.234:6443: connect: connection refused" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:48.488832 kubelet[2483]: W0124 03:08:48.488756 2483 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.26.234:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.26.234:6443: connect: connection refused Jan 24 03:08:48.489099 kubelet[2483]: E0124 03:08:48.489045 2483 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.26.234:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.26.234:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:08:48.739221 kubelet[2483]: W0124 03:08:48.738881 2483 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.26.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jddbi.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.26.234:6443: connect: connection refused Jan 24 03:08:48.739221 kubelet[2483]: E0124 03:08:48.738963 2483 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.26.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jddbi.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.26.234:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:08:48.742323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174440360.mount: Deactivated successfully. 
Jan 24 03:08:48.760859 containerd[1627]: time="2026-01-24T03:08:48.760765470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 03:08:48.764876 containerd[1627]: time="2026-01-24T03:08:48.763727517Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 03:08:48.767379 containerd[1627]: time="2026-01-24T03:08:48.767306527Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 24 03:08:48.768830 containerd[1627]: time="2026-01-24T03:08:48.768746151Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 03:08:48.770672 containerd[1627]: time="2026-01-24T03:08:48.770354326Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 03:08:48.772143 containerd[1627]: time="2026-01-24T03:08:48.772093630Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 03:08:48.773055 containerd[1627]: time="2026-01-24T03:08:48.772967693Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 03:08:48.775433 containerd[1627]: time="2026-01-24T03:08:48.775363238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 03:08:48.778135 containerd[1627]: time="2026-01-24T03:08:48.777736742Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 653.781717ms" Jan 24 03:08:48.780440 containerd[1627]: time="2026-01-24T03:08:48.780390018Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 660.794401ms" Jan 24 03:08:48.822807 containerd[1627]: time="2026-01-24T03:08:48.822718413Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 702.489094ms" Jan 24 03:08:49.017887 containerd[1627]: time="2026-01-24T03:08:49.015354563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:08:49.017887 containerd[1627]: time="2026-01-24T03:08:49.015472523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:08:49.017887 containerd[1627]: time="2026-01-24T03:08:49.015498628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:08:49.017887 containerd[1627]: time="2026-01-24T03:08:49.015642611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:08:49.032472 containerd[1627]: time="2026-01-24T03:08:49.032172691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:08:49.033466 containerd[1627]: time="2026-01-24T03:08:49.033108411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:08:49.033466 containerd[1627]: time="2026-01-24T03:08:49.033209728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:08:49.033466 containerd[1627]: time="2026-01-24T03:08:49.033252270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:08:49.034124 containerd[1627]: time="2026-01-24T03:08:49.033819850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:08:49.034354 containerd[1627]: time="2026-01-24T03:08:49.034291008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:08:49.034412 containerd[1627]: time="2026-01-24T03:08:49.034369568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:08:49.034896 containerd[1627]: time="2026-01-24T03:08:49.034518013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:08:49.071586 kubelet[2483]: W0124 03:08:49.071422 2483 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.26.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.26.234:6443: connect: connection refused Jan 24 03:08:49.071586 kubelet[2483]: E0124 03:08:49.071527 2483 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.26.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.26.234:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:08:49.083582 kubelet[2483]: E0124 03:08:49.083515 2483 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.26.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jddbi.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.26.234:6443: connect: connection refused" interval="1.6s" Jan 24 03:08:49.282323 kubelet[2483]: I0124 03:08:49.282024 2483 kubelet_node_status.go:75] "Attempting to register node" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:49.292810 kubelet[2483]: E0124 03:08:49.291435 2483 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.26.234:6443/api/v1/nodes\": dial tcp 10.244.26.234:6443: connect: connection refused" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:49.292810 kubelet[2483]: W0124 03:08:49.292565 2483 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.26.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.26.234:6443: connect: connection refused Jan 24 03:08:49.292810 kubelet[2483]: E0124 03:08:49.292671 2483 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.26.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.26.234:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:08:49.407123 containerd[1627]: time="2026-01-24T03:08:49.406907882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-jddbi.gb1.brightbox.com,Uid:82461c6dfc56db4352a488c0326c7db9,Namespace:kube-system,Attempt:0,} returns sandbox id \"646611bd979bddd8968100d265119b3b187e249285e914ac1361a0ad1665b75b\"" Jan 24 03:08:49.419629 containerd[1627]: time="2026-01-24T03:08:49.418542949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-jddbi.gb1.brightbox.com,Uid:7fead7718bf5599f1b4c4d67c2371bb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0b9fcec6b063bddc91227665a2503e0d825faa61ea00592f9d2ecadbb61b9fd\"" Jan 24 03:08:49.421438 containerd[1627]: time="2026-01-24T03:08:49.421109260Z" level=info msg="CreateContainer within sandbox \"646611bd979bddd8968100d265119b3b187e249285e914ac1361a0ad1665b75b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 03:08:49.423363 containerd[1627]: time="2026-01-24T03:08:49.423244334Z" level=info msg="CreateContainer within sandbox \"d0b9fcec6b063bddc91227665a2503e0d825faa61ea00592f9d2ecadbb61b9fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 03:08:49.427845 containerd[1627]: time="2026-01-24T03:08:49.427233341Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-jddbi.gb1.brightbox.com,Uid:4bc4429e02a06002d54dbbb184d4749b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d29b389bfb20d841e2453674c2764dcf842cc91558e916a48c6784186c0594c\"" Jan 24 03:08:49.432250 containerd[1627]: time="2026-01-24T03:08:49.432216261Z" level=info msg="CreateContainer within sandbox \"9d29b389bfb20d841e2453674c2764dcf842cc91558e916a48c6784186c0594c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 03:08:49.464218 containerd[1627]: time="2026-01-24T03:08:49.464158872Z" level=info msg="CreateContainer within sandbox \"646611bd979bddd8968100d265119b3b187e249285e914ac1361a0ad1665b75b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"31a747dc3039b3c1c456af06c1dc90dd4783e20229720de050517f0d531a7497\"" Jan 24 03:08:49.465807 containerd[1627]: time="2026-01-24T03:08:49.465747933Z" level=info msg="StartContainer for \"31a747dc3039b3c1c456af06c1dc90dd4783e20229720de050517f0d531a7497\"" Jan 24 03:08:49.466736 containerd[1627]: time="2026-01-24T03:08:49.466700171Z" level=info msg="CreateContainer within sandbox \"d0b9fcec6b063bddc91227665a2503e0d825faa61ea00592f9d2ecadbb61b9fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"df40be74b52a509b0fba48ba219860ad84ca85e8d90847eb5a9f116d55c0deb6\"" Jan 24 03:08:49.467430 containerd[1627]: time="2026-01-24T03:08:49.467399937Z" level=info msg="StartContainer for \"df40be74b52a509b0fba48ba219860ad84ca85e8d90847eb5a9f116d55c0deb6\"" Jan 24 03:08:49.472479 containerd[1627]: time="2026-01-24T03:08:49.472321740Z" level=info msg="CreateContainer within sandbox \"9d29b389bfb20d841e2453674c2764dcf842cc91558e916a48c6784186c0594c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"002753e84b4d180efc6d3e7be3e857e847a87c2fde9480e91a94cdf0d08cef2b\"" Jan 24 03:08:49.473621 containerd[1627]: time="2026-01-24T03:08:49.472816693Z" level=info msg="StartContainer for \"002753e84b4d180efc6d3e7be3e857e847a87c2fde9480e91a94cdf0d08cef2b\"" Jan 24 03:08:49.622152 containerd[1627]: time="2026-01-24T03:08:49.622013040Z" level=info msg="StartContainer for \"31a747dc3039b3c1c456af06c1dc90dd4783e20229720de050517f0d531a7497\" returns successfully" Jan 24 03:08:49.642214 kubelet[2483]: E0124 03:08:49.642062 2483 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.26.234:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.26.234:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:08:49.662846 containerd[1627]: time="2026-01-24T03:08:49.662520153Z" level=info msg="StartContainer for \"002753e84b4d180efc6d3e7be3e857e847a87c2fde9480e91a94cdf0d08cef2b\" returns successfully" Jan 24 03:08:49.690223 containerd[1627]: time="2026-01-24T03:08:49.690166807Z" level=info msg="StartContainer for \"df40be74b52a509b0fba48ba219860ad84ca85e8d90847eb5a9f116d55c0deb6\" returns successfully" Jan 24 03:08:49.764623 kubelet[2483]: E0124 03:08:49.762371 2483 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jddbi.gb1.brightbox.com\" not found" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:49.775001 kubelet[2483]: E0124 03:08:49.771866 2483 kubelet.go:3190] "No need to create a mirror pod, since failed 
to get node info from the cluster" err="node \"srv-jddbi.gb1.brightbox.com\" not found" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:49.792628 kubelet[2483]: E0124 03:08:49.791656 2483 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jddbi.gb1.brightbox.com\" not found" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:50.786640 kubelet[2483]: E0124 03:08:50.785076 2483 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jddbi.gb1.brightbox.com\" not found" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:50.786640 kubelet[2483]: E0124 03:08:50.785546 2483 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jddbi.gb1.brightbox.com\" not found" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:50.895255 kubelet[2483]: I0124 03:08:50.895215 2483 kubelet_node_status.go:75] "Attempting to register node" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:52.653359 kubelet[2483]: I0124 03:08:52.653120 2483 apiserver.go:52] "Watching apiserver" Jan 24 03:08:52.716035 kubelet[2483]: E0124 03:08:52.714476 2483 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-jddbi.gb1.brightbox.com\" not found" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:52.780426 kubelet[2483]: I0124 03:08:52.780104 2483 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 03:08:52.836748 kubelet[2483]: I0124 03:08:52.833935 2483 kubelet_node_status.go:78] "Successfully registered node" node="srv-jddbi.gb1.brightbox.com" Jan 24 03:08:52.836748 kubelet[2483]: E0124 03:08:52.833992 2483 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-jddbi.gb1.brightbox.com\": node \"srv-jddbi.gb1.brightbox.com\" not found" Jan 24 03:08:52.880319 kubelet[2483]: I0124 03:08:52.879894 2483 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:52.891987 kubelet[2483]: E0124 03:08:52.891407 2483 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-jddbi.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:52.891987 kubelet[2483]: I0124 03:08:52.891509 2483 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:52.895622 kubelet[2483]: E0124 03:08:52.893805 2483 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-jddbi.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:52.895622 kubelet[2483]: I0124 03:08:52.893839 2483 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:52.897922 kubelet[2483]: E0124 03:08:52.897888 2483 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-jddbi.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-jddbi.gb1.brightbox.com" Jan 24 03:08:55.096690 systemd[1]: Reloading requested from client PID 2755 ('systemctl') (unit session-11.scope)... 
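This second reload cycle restarts the kubelet once more, and in the entries that follow, "Loading cert/key pair from /var/lib/kubelet/pki/kubelet-client-current.pem" replaces the earlier CSR failures: the TLS bootstrap completed while the first instance was running, and the restarted process picks up the rotated client certificate directly. Standard openssl usage is enough to inspect it on the node:

    openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -dates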
Jan 24 03:08:55.097246 systemd[1]: Reloading...
Jan 24 03:08:55.220694 zram_generator::config[2800]: No configuration found.
Jan 24 03:08:55.408867 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 03:08:55.537322 systemd[1]: Reloading finished in 439 ms.
Jan 24 03:08:55.588879 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 03:08:55.604382 systemd[1]: kubelet.service: Deactivated successfully.
Jan 24 03:08:55.604960 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 03:08:55.619207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 03:08:55.839790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 03:08:55.856261 (kubelet)[2868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 24 03:08:55.979198 kubelet[2868]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 03:08:55.979198 kubelet[2868]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 24 03:08:55.979198 kubelet[2868]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 03:08:55.979813 kubelet[2868]: I0124 03:08:55.979276 2868 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 24 03:08:55.989721 kubelet[2868]: I0124 03:08:55.989670 2868 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 24 03:08:55.989721 kubelet[2868]: I0124 03:08:55.989709 2868 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 24 03:08:55.990320 kubelet[2868]: I0124 03:08:55.990281 2868 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 24 03:08:55.992220 kubelet[2868]: I0124 03:08:55.992185 2868 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 24 03:08:56.000613 kubelet[2868]: I0124 03:08:56.000558 2868 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 24 03:08:56.011668 kubelet[2868]: E0124 03:08:56.010684 2868 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 24 03:08:56.011668 kubelet[2868]: I0124 03:08:56.010726 2868 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 24 03:08:56.020032 kubelet[2868]: I0124 03:08:56.019973 2868 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 24 03:08:56.021683 kubelet[2868]: I0124 03:08:56.021589 2868 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 03:08:56.021946 kubelet[2868]: I0124 03:08:56.021671 2868 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-jddbi.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jan 24 03:08:56.022108 kubelet[2868]: I0124 03:08:56.021967 2868 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 03:08:56.022108 kubelet[2868]: I0124 03:08:56.021987 2868 container_manager_linux.go:304] "Creating device plugin manager"
Jan 24 03:08:56.024574 kubelet[2868]: I0124 03:08:56.024536 2868 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 03:08:56.024881 kubelet[2868]: I0124 03:08:56.024854 2868 kubelet.go:446] "Attempting to sync node with API server"
Jan 24 03:08:56.024981 kubelet[2868]: I0124 03:08:56.024904 2868 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 24 03:08:56.024981 kubelet[2868]: I0124 03:08:56.024945 2868 kubelet.go:352] "Adding apiserver pod source"
Jan 24 03:08:56.024981 kubelet[2868]: I0124 03:08:56.024968 2868 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 03:08:56.032617 kubelet[2868]: I0124 03:08:56.030119 2868 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 24 03:08:56.035816 kubelet[2868]: I0124 03:08:56.034490 2868 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 24 03:08:56.047431 kubelet[2868]: I0124 03:08:56.047398 2868 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 24 03:08:56.048587 kubelet[2868]: I0124 03:08:56.047670 2868 server.go:1287] "Started kubelet"
Jan 24 03:08:56.049350 kubelet[2868]: I0124 03:08:56.049263 2868 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 24 03:08:56.052619 kubelet[2868]: I0124 03:08:56.051370 2868 server.go:479] "Adding debug handlers to kubelet server"
Jan 24 03:08:56.052619 kubelet[2868]: I0124 03:08:56.051913 2868 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 24 03:08:56.053490 kubelet[2868]: I0124 03:08:56.053467 2868 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 24 03:08:56.057966 kubelet[2868]: I0124 03:08:56.057300 2868 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 24 03:08:56.071292 kubelet[2868]: I0124 03:08:56.070297 2868 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 24 03:08:56.072343 kubelet[2868]: I0124 03:08:56.071772 2868 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 24 03:08:56.074019 kubelet[2868]: I0124 03:08:56.073856 2868 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 24 03:08:56.074539 kubelet[2868]: I0124 03:08:56.074511 2868 reconciler.go:26] "Reconciler: start to sync state"
Jan 24 03:08:56.075455 kubelet[2868]: E0124 03:08:56.075114 2868 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 24 03:08:56.082332 kubelet[2868]: I0124 03:08:56.082282 2868 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 24 03:08:56.088083 kubelet[2868]: I0124 03:08:56.087952 2868 factory.go:221] Registration of the containerd container factory successfully
Jan 24 03:08:56.088083 kubelet[2868]: I0124 03:08:56.088008 2868 factory.go:221] Registration of the systemd container factory successfully
Jan 24 03:08:56.101731 kubelet[2868]: I0124 03:08:56.101377 2868 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 24 03:08:56.121714 kubelet[2868]: I0124 03:08:56.121578 2868 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 24 03:08:56.121714 kubelet[2868]: I0124 03:08:56.121679 2868 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 24 03:08:56.121714 kubelet[2868]: I0124 03:08:56.121723 2868 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 24 03:08:56.121962 kubelet[2868]: I0124 03:08:56.121736 2868 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 24 03:08:56.121962 kubelet[2868]: E0124 03:08:56.121816 2868 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 24 03:08:56.212275 kubelet[2868]: I0124 03:08:56.212237 2868 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 24 03:08:56.212275 kubelet[2868]: I0124 03:08:56.212265 2868 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 24 03:08:56.212275 kubelet[2868]: I0124 03:08:56.212300 2868 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 03:08:56.213170 kubelet[2868]: I0124 03:08:56.213141 2868 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 24 03:08:56.213246 kubelet[2868]: I0124 03:08:56.213173 2868 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 24 03:08:56.213246 kubelet[2868]: I0124 03:08:56.213239 2868 policy_none.go:49] "None policy: Start"
Jan 24 03:08:56.213365 kubelet[2868]: I0124 03:08:56.213257 2868 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 24 03:08:56.213365 kubelet[2868]: I0124 03:08:56.213311 2868 state_mem.go:35] "Initializing new in-memory state store"
Jan 24 03:08:56.213544 kubelet[2868]: I0124 03:08:56.213519 2868 state_mem.go:75] "Updated machine memory state"
Jan 24 03:08:56.218816 kubelet[2868]: I0124 03:08:56.218766 2868 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 24 03:08:56.219141 kubelet[2868]: I0124 03:08:56.219089 2868 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 24 03:08:56.219258 kubelet[2868]: I0124 03:08:56.219122 2868 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 24 03:08:56.227657 kubelet[2868]: I0124 03:08:56.226154 2868 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 24 03:08:56.230547 kubelet[2868]: I0124 03:08:56.230450 2868 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:56.233449 kubelet[2868]: I0124 03:08:56.230971 2868 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:56.233449 kubelet[2868]: I0124 03:08:56.231374 2868 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:56.241178 kubelet[2868]: E0124 03:08:56.237668 2868 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 24 03:08:56.245945 kubelet[2868]: W0124 03:08:56.245242 2868 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 24 03:08:56.248657 kubelet[2868]: W0124 03:08:56.246597 2868 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 24 03:08:56.249678 kubelet[2868]: W0124 03:08:56.248035 2868 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 24 03:08:56.276685 kubelet[2868]: I0124 03:08:56.276277 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fead7718bf5599f1b4c4d67c2371bb5-kubeconfig\") pod \"kube-scheduler-srv-jddbi.gb1.brightbox.com\" (UID: \"7fead7718bf5599f1b4c4d67c2371bb5\") " pod="kube-system/kube-scheduler-srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:56.276685 kubelet[2868]: I0124 03:08:56.276338 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82461c6dfc56db4352a488c0326c7db9-k8s-certs\") pod \"kube-apiserver-srv-jddbi.gb1.brightbox.com\" (UID: \"82461c6dfc56db4352a488c0326c7db9\") " pod="kube-system/kube-apiserver-srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:56.276685 kubelet[2868]: I0124 03:08:56.276369 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82461c6dfc56db4352a488c0326c7db9-usr-share-ca-certificates\") pod \"kube-apiserver-srv-jddbi.gb1.brightbox.com\" (UID: \"82461c6dfc56db4352a488c0326c7db9\") " pod="kube-system/kube-apiserver-srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:56.276685 kubelet[2868]: I0124 03:08:56.276414 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4bc4429e02a06002d54dbbb184d4749b-ca-certs\") pod \"kube-controller-manager-srv-jddbi.gb1.brightbox.com\" (UID: \"4bc4429e02a06002d54dbbb184d4749b\") " pod="kube-system/kube-controller-manager-srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:56.276685 kubelet[2868]: I0124 03:08:56.276445 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4bc4429e02a06002d54dbbb184d4749b-k8s-certs\") pod \"kube-controller-manager-srv-jddbi.gb1.brightbox.com\" (UID: \"4bc4429e02a06002d54dbbb184d4749b\") " pod="kube-system/kube-controller-manager-srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:56.278409 kubelet[2868]: I0124 03:08:56.276474 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4bc4429e02a06002d54dbbb184d4749b-kubeconfig\") pod \"kube-controller-manager-srv-jddbi.gb1.brightbox.com\" (UID: \"4bc4429e02a06002d54dbbb184d4749b\") " pod="kube-system/kube-controller-manager-srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:56.278409 kubelet[2868]: I0124 03:08:56.276508 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4bc4429e02a06002d54dbbb184d4749b-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-jddbi.gb1.brightbox.com\" (UID: \"4bc4429e02a06002d54dbbb184d4749b\") " pod="kube-system/kube-controller-manager-srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:56.278409 kubelet[2868]: I0124 03:08:56.277506 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82461c6dfc56db4352a488c0326c7db9-ca-certs\") pod \"kube-apiserver-srv-jddbi.gb1.brightbox.com\" (UID: \"82461c6dfc56db4352a488c0326c7db9\") " pod="kube-system/kube-apiserver-srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:56.278409 kubelet[2868]: I0124 03:08:56.277540 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4bc4429e02a06002d54dbbb184d4749b-flexvolume-dir\") pod \"kube-controller-manager-srv-jddbi.gb1.brightbox.com\" (UID: \"4bc4429e02a06002d54dbbb184d4749b\") " pod="kube-system/kube-controller-manager-srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:56.356812 kubelet[2868]: I0124 03:08:56.354360 2868 kubelet_node_status.go:75] "Attempting to register node" node="srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:56.371319 kubelet[2868]: I0124 03:08:56.371082 2868 kubelet_node_status.go:124] "Node was previously registered" node="srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:56.371319 kubelet[2868]: I0124 03:08:56.371218 2868 kubelet_node_status.go:78] "Successfully registered node" node="srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:57.026633 kubelet[2868]: I0124 03:08:57.025551 2868 apiserver.go:52] "Watching apiserver"
Jan 24 03:08:57.075095 kubelet[2868]: I0124 03:08:57.075033 2868 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 24 03:08:57.113583 kubelet[2868]: I0124 03:08:57.113445 2868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-jddbi.gb1.brightbox.com" podStartSLOduration=1.113360814 podStartE2EDuration="1.113360814s" podCreationTimestamp="2026-01-24 03:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:08:57.11295793 +0000 UTC m=+1.212205158" watchObservedRunningTime="2026-01-24 03:08:57.113360814 +0000 UTC m=+1.212608008"
Jan 24 03:08:57.113858 kubelet[2868]: I0124 03:08:57.113721 2868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-jddbi.gb1.brightbox.com" podStartSLOduration=1.113711708 podStartE2EDuration="1.113711708s" podCreationTimestamp="2026-01-24 03:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:08:57.09644089 +0000 UTC m=+1.195688104" watchObservedRunningTime="2026-01-24 03:08:57.113711708 +0000 UTC m=+1.212958896"
Jan 24 03:08:57.153261 kubelet[2868]: I0124 03:08:57.153220 2868 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jddbi.gb1.brightbox.com"
Jan 24 03:08:57.171973 kubelet[2868]: I0124 03:08:57.171544 2868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-jddbi.gb1.brightbox.com" podStartSLOduration=1.171523801 podStartE2EDuration="1.171523801s" podCreationTimestamp="2026-01-24 03:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:08:57.137653896 +0000 UTC m=+1.236901098" watchObservedRunningTime="2026-01-24 03:08:57.171523801 +0000 UTC m=+1.270770998"
Jan 24 03:08:57.181615 kubelet[2868]: W0124 03:08:57.180536 2868 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 24 03:08:57.181615 kubelet[2868]: E0124 03:08:57.180626 2868 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-jddbi.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-jddbi.gb1.brightbox.com"
Jan 24 03:09:01.242179 kubelet[2868]: I0124 03:09:01.242053 2868 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 24 03:09:01.243386 kubelet[2868]: I0124 03:09:01.243303 2868 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 24 03:09:01.243458 containerd[1627]: time="2026-01-24T03:09:01.243075993Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 24 03:09:02.182621 kubelet[2868]: W0124 03:09:02.178364 2868 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-jddbi.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'srv-jddbi.gb1.brightbox.com' and this object
Jan 24 03:09:02.182621 kubelet[2868]: E0124 03:09:02.178737 2868 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-jddbi.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-jddbi.gb1.brightbox.com' and this object" logger="UnhandledError"
Jan 24 03:09:02.182621 kubelet[2868]: I0124 03:09:02.178834 2868 status_manager.go:890] "Failed to get status for pod" podUID="402b2d75-45a9-4eab-ba05-eb69deec0424" pod="tigera-operator/tigera-operator-7dcd859c48-kflll" err="pods \"tigera-operator-7dcd859c48-kflll\" is forbidden: User \"system:node:srv-jddbi.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-jddbi.gb1.brightbox.com' and this object"
Jan 24 03:09:02.182621 kubelet[2868]: W0124 03:09:02.180680 2868 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:srv-jddbi.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'srv-jddbi.gb1.brightbox.com' and this object
Jan 24 03:09:02.182975 kubelet[2868]: E0124 03:09:02.180711 2868 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:srv-jddbi.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-jddbi.gb1.brightbox.com' and this object" logger="UnhandledError"
Jan 24 03:09:02.235651 kubelet[2868]: I0124 03:09:02.235282 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eaeb7564-6358-4dea-af94-770fe568ed21-lib-modules\") pod \"kube-proxy-9l67k\" (UID: \"eaeb7564-6358-4dea-af94-770fe568ed21\") " pod="kube-system/kube-proxy-9l67k"
Jan 24 03:09:02.235651 kubelet[2868]: I0124 03:09:02.235338 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5xdp\" (UniqueName: \"kubernetes.io/projected/eaeb7564-6358-4dea-af94-770fe568ed21-kube-api-access-c5xdp\") pod \"kube-proxy-9l67k\" (UID: \"eaeb7564-6358-4dea-af94-770fe568ed21\") " pod="kube-system/kube-proxy-9l67k"
Jan 24 03:09:02.235651 kubelet[2868]: I0124 03:09:02.235377 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eaeb7564-6358-4dea-af94-770fe568ed21-kube-proxy\") pod \"kube-proxy-9l67k\" (UID: \"eaeb7564-6358-4dea-af94-770fe568ed21\") " pod="kube-system/kube-proxy-9l67k"
Jan 24 03:09:02.235651 kubelet[2868]: I0124 03:09:02.235405 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eaeb7564-6358-4dea-af94-770fe568ed21-xtables-lock\") pod \"kube-proxy-9l67k\" (UID: \"eaeb7564-6358-4dea-af94-770fe568ed21\") " pod="kube-system/kube-proxy-9l67k"
Jan 24 03:09:02.235651 kubelet[2868]: I0124 03:09:02.235434 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/402b2d75-45a9-4eab-ba05-eb69deec0424-var-lib-calico\") pod \"tigera-operator-7dcd859c48-kflll\" (UID: \"402b2d75-45a9-4eab-ba05-eb69deec0424\") " pod="tigera-operator/tigera-operator-7dcd859c48-kflll"
Jan 24 03:09:02.236147 kubelet[2868]: I0124 03:09:02.235464 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw7qj\" (UniqueName: \"kubernetes.io/projected/402b2d75-45a9-4eab-ba05-eb69deec0424-kube-api-access-pw7qj\") pod \"tigera-operator-7dcd859c48-kflll\" (UID: \"402b2d75-45a9-4eab-ba05-eb69deec0424\") " pod="tigera-operator/tigera-operator-7dcd859c48-kflll"
Jan 24 03:09:02.529711 containerd[1627]: time="2026-01-24T03:09:02.529048101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9l67k,Uid:eaeb7564-6358-4dea-af94-770fe568ed21,Namespace:kube-system,Attempt:0,}"
Jan 24 03:09:02.577615 containerd[1627]: time="2026-01-24T03:09:02.577419630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 03:09:02.577615 containerd[1627]: time="2026-01-24T03:09:02.577500525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 03:09:02.577615 containerd[1627]: time="2026-01-24T03:09:02.577525906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 03:09:02.578266 containerd[1627]: time="2026-01-24T03:09:02.577834329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 03:09:02.647276 containerd[1627]: time="2026-01-24T03:09:02.647064933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9l67k,Uid:eaeb7564-6358-4dea-af94-770fe568ed21,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe8141185fa2d76d976238e5aedfc35e64494969b40f56d0536bf7742b95590b\""
Jan 24 03:09:02.656002 containerd[1627]: time="2026-01-24T03:09:02.655949102Z" level=info msg="CreateContainer within sandbox \"fe8141185fa2d76d976238e5aedfc35e64494969b40f56d0536bf7742b95590b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 24 03:09:02.679764 containerd[1627]: time="2026-01-24T03:09:02.679693407Z" level=info msg="CreateContainer within sandbox \"fe8141185fa2d76d976238e5aedfc35e64494969b40f56d0536bf7742b95590b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2254c807adf11c7afbe4c72daeb31a8b3a137bab14ae5fc3dad0524e17dfaed4\""
Jan 24 03:09:02.682515 containerd[1627]: time="2026-01-24T03:09:02.680814314Z" level=info msg="StartContainer for \"2254c807adf11c7afbe4c72daeb31a8b3a137bab14ae5fc3dad0524e17dfaed4\""
Jan 24 03:09:02.784760 containerd[1627]: time="2026-01-24T03:09:02.784329446Z" level=info msg="StartContainer for \"2254c807adf11c7afbe4c72daeb31a8b3a137bab14ae5fc3dad0524e17dfaed4\" returns successfully"
Jan 24 03:09:03.202148 kubelet[2868]: I0124 03:09:03.201975 2868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9l67k" podStartSLOduration=1.2019525309999999 podStartE2EDuration="1.201952531s" podCreationTimestamp="2026-01-24 03:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:09:03.200242721 +0000 UTC m=+7.299489935" watchObservedRunningTime="2026-01-24 03:09:03.201952531 +0000 UTC m=+7.301199723"
Jan 24 03:09:03.349722 kubelet[2868]: E0124 03:09:03.349337 2868 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 24 03:09:03.349722 kubelet[2868]: E0124 03:09:03.349458 2868 projected.go:194] Error preparing data for projected volume kube-api-access-pw7qj for pod tigera-operator/tigera-operator-7dcd859c48-kflll: failed to sync configmap cache: timed out waiting for the condition
Jan 24 03:09:03.349722 kubelet[2868]: E0124 03:09:03.349710 2868 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/402b2d75-45a9-4eab-ba05-eb69deec0424-kube-api-access-pw7qj podName:402b2d75-45a9-4eab-ba05-eb69deec0424 nodeName:}" failed. No retries permitted until 2026-01-24 03:09:03.849616468 +0000 UTC m=+7.948863662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pw7qj" (UniqueName: "kubernetes.io/projected/402b2d75-45a9-4eab-ba05-eb69deec0424-kube-api-access-pw7qj") pod "tigera-operator-7dcd859c48-kflll" (UID: "402b2d75-45a9-4eab-ba05-eb69deec0424") : failed to sync configmap cache: timed out waiting for the condition
Jan 24 03:09:03.979227 containerd[1627]: time="2026-01-24T03:09:03.978579530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kflll,Uid:402b2d75-45a9-4eab-ba05-eb69deec0424,Namespace:tigera-operator,Attempt:0,}"
Jan 24 03:09:04.031752 containerd[1627]: time="2026-01-24T03:09:04.031230607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 03:09:04.031752 containerd[1627]: time="2026-01-24T03:09:04.031327131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 03:09:04.031752 containerd[1627]: time="2026-01-24T03:09:04.031351584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 03:09:04.032798 containerd[1627]: time="2026-01-24T03:09:04.032679626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 03:09:04.135432 containerd[1627]: time="2026-01-24T03:09:04.135317573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kflll,Uid:402b2d75-45a9-4eab-ba05-eb69deec0424,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f8aab97b770c0f33fb985dc4d163aba013acec7a1e0e1dcb46752a276fdfe957\""
Jan 24 03:09:04.141244 containerd[1627]: time="2026-01-24T03:09:04.139575180Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 24 03:09:04.355366 systemd[1]: run-containerd-runc-k8s.io-f8aab97b770c0f33fb985dc4d163aba013acec7a1e0e1dcb46752a276fdfe957-runc.wUBQNw.mount: Deactivated successfully.
Jan 24 03:09:06.080903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2576452317.mount: Deactivated successfully.
Jan 24 03:09:07.323011 containerd[1627]: time="2026-01-24T03:09:07.322902585Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:09:07.325127 containerd[1627]: time="2026-01-24T03:09:07.324932720Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 24 03:09:07.328239 containerd[1627]: time="2026-01-24T03:09:07.328195195Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:09:07.331625 containerd[1627]: time="2026-01-24T03:09:07.331358434Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:09:07.332745 containerd[1627]: time="2026-01-24T03:09:07.332702498Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.193052798s"
Jan 24 03:09:07.332848 containerd[1627]: time="2026-01-24T03:09:07.332764360Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Jan 24 03:09:07.337987 containerd[1627]: time="2026-01-24T03:09:07.337937153Z" level=info msg="CreateContainer within sandbox \"f8aab97b770c0f33fb985dc4d163aba013acec7a1e0e1dcb46752a276fdfe957\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 24 03:09:07.354385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount385585851.mount: Deactivated successfully.
Jan 24 03:09:07.359129 containerd[1627]: time="2026-01-24T03:09:07.358707943Z" level=info msg="CreateContainer within sandbox \"f8aab97b770c0f33fb985dc4d163aba013acec7a1e0e1dcb46752a276fdfe957\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"50e74b1cb9b9c46d265127b1d4ffd82fd7348aba0081a459014f671c048967a5\""
Jan 24 03:09:07.362365 containerd[1627]: time="2026-01-24T03:09:07.360922179Z" level=info msg="StartContainer for \"50e74b1cb9b9c46d265127b1d4ffd82fd7348aba0081a459014f671c048967a5\""
Jan 24 03:09:07.458224 containerd[1627]: time="2026-01-24T03:09:07.458155999Z" level=info msg="StartContainer for \"50e74b1cb9b9c46d265127b1d4ffd82fd7348aba0081a459014f671c048967a5\" returns successfully"
Jan 24 03:09:13.667187 sudo[1914]: pam_unix(sudo:session): session closed for user root
Jan 24 03:09:13.767880 sshd[1910]: pam_unix(sshd:session): session closed for user core
Jan 24 03:09:13.776317 systemd-logind[1597]: Session 11 logged out. Waiting for processes to exit.
Jan 24 03:09:13.778308 systemd[1]: sshd@8-10.244.26.234:22-20.161.92.111:53704.service: Deactivated successfully.
Jan 24 03:09:13.791340 systemd[1]: session-11.scope: Deactivated successfully.
Jan 24 03:09:13.802049 systemd-logind[1597]: Removed session 11.
Jan 24 03:09:21.299965 kubelet[2868]: I0124 03:09:21.299557 2868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-kflll" podStartSLOduration=16.102348577 podStartE2EDuration="19.299501331s" podCreationTimestamp="2026-01-24 03:09:02 +0000 UTC" firstStartedPulling="2026-01-24 03:09:04.137368747 +0000 UTC m=+8.236615935" lastFinishedPulling="2026-01-24 03:09:07.3345215 +0000 UTC m=+11.433768689" observedRunningTime="2026-01-24 03:09:08.210068908 +0000 UTC m=+12.309316107" watchObservedRunningTime="2026-01-24 03:09:21.299501331 +0000 UTC m=+25.398748524"
Jan 24 03:09:21.388657 kubelet[2868]: I0124 03:09:21.387929 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a429c078-694a-464b-9809-de7d403b2980-typha-certs\") pod \"calico-typha-7b4f6f445-dhxxq\" (UID: \"a429c078-694a-464b-9809-de7d403b2980\") " pod="calico-system/calico-typha-7b4f6f445-dhxxq"
Jan 24 03:09:21.388657 kubelet[2868]: I0124 03:09:21.388009 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a429c078-694a-464b-9809-de7d403b2980-tigera-ca-bundle\") pod \"calico-typha-7b4f6f445-dhxxq\" (UID: \"a429c078-694a-464b-9809-de7d403b2980\") " pod="calico-system/calico-typha-7b4f6f445-dhxxq"
Jan 24 03:09:21.388657 kubelet[2868]: I0124 03:09:21.388145 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpw7s\" (UniqueName: \"kubernetes.io/projected/a429c078-694a-464b-9809-de7d403b2980-kube-api-access-lpw7s\") pod \"calico-typha-7b4f6f445-dhxxq\" (UID: \"a429c078-694a-464b-9809-de7d403b2980\") " pod="calico-system/calico-typha-7b4f6f445-dhxxq"
Jan 24 03:09:21.591031 kubelet[2868]: I0124 03:09:21.589773 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5ea4b2c9-bfc3-43c5-8167-05971c627092-cni-log-dir\") pod \"calico-node-2xz9q\" (UID: \"5ea4b2c9-bfc3-43c5-8167-05971c627092\") " pod="calico-system/calico-node-2xz9q"
Jan 24 03:09:21.591031 kubelet[2868]: I0124 03:09:21.589851 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ea4b2c9-bfc3-43c5-8167-05971c627092-lib-modules\") pod \"calico-node-2xz9q\" (UID: \"5ea4b2c9-bfc3-43c5-8167-05971c627092\") " pod="calico-system/calico-node-2xz9q"
Jan 24 03:09:21.591031 kubelet[2868]: I0124 03:09:21.589887 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ea4b2c9-bfc3-43c5-8167-05971c627092-tigera-ca-bundle\") pod \"calico-node-2xz9q\" (UID: \"5ea4b2c9-bfc3-43c5-8167-05971c627092\") " pod="calico-system/calico-node-2xz9q"
Jan 24 03:09:21.591031 kubelet[2868]: I0124 03:09:21.589918 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5ea4b2c9-bfc3-43c5-8167-05971c627092-flexvol-driver-host\") pod \"calico-node-2xz9q\" (UID: \"5ea4b2c9-bfc3-43c5-8167-05971c627092\") " pod="calico-system/calico-node-2xz9q"
Jan 24 03:09:21.591031 kubelet[2868]: I0124 03:09:21.589959 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5ea4b2c9-bfc3-43c5-8167-05971c627092-var-lib-calico\") pod \"calico-node-2xz9q\" (UID: \"5ea4b2c9-bfc3-43c5-8167-05971c627092\") " pod="calico-system/calico-node-2xz9q"
Jan 24 03:09:21.591447 kubelet[2868]: I0124 03:09:21.589991 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5ea4b2c9-bfc3-43c5-8167-05971c627092-node-certs\") pod \"calico-node-2xz9q\" (UID: \"5ea4b2c9-bfc3-43c5-8167-05971c627092\") " pod="calico-system/calico-node-2xz9q"
Jan 24 03:09:21.591447 kubelet[2868]: I0124 03:09:21.590021 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5ea4b2c9-bfc3-43c5-8167-05971c627092-policysync\") pod \"calico-node-2xz9q\" (UID: \"5ea4b2c9-bfc3-43c5-8167-05971c627092\") " pod="calico-system/calico-node-2xz9q"
Jan 24 03:09:21.591447 kubelet[2868]: I0124 03:09:21.590047 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5ea4b2c9-bfc3-43c5-8167-05971c627092-var-run-calico\") pod \"calico-node-2xz9q\" (UID: \"5ea4b2c9-bfc3-43c5-8167-05971c627092\") " pod="calico-system/calico-node-2xz9q"
Jan 24 03:09:21.591447 kubelet[2868]: I0124 03:09:21.590074 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x2pq\" (UniqueName: \"kubernetes.io/projected/5ea4b2c9-bfc3-43c5-8167-05971c627092-kube-api-access-4x2pq\") pod \"calico-node-2xz9q\" (UID: \"5ea4b2c9-bfc3-43c5-8167-05971c627092\") " pod="calico-system/calico-node-2xz9q"
Jan 24 03:09:21.591447 kubelet[2868]: I0124 03:09:21.590104 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5ea4b2c9-bfc3-43c5-8167-05971c627092-cni-bin-dir\") pod \"calico-node-2xz9q\" (UID: \"5ea4b2c9-bfc3-43c5-8167-05971c627092\") " pod="calico-system/calico-node-2xz9q"
Jan 24 03:09:21.591743 kubelet[2868]: I0124 03:09:21.590253 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ea4b2c9-bfc3-43c5-8167-05971c627092-xtables-lock\") pod \"calico-node-2xz9q\" (UID: \"5ea4b2c9-bfc3-43c5-8167-05971c627092\") " pod="calico-system/calico-node-2xz9q"
Jan 24 03:09:21.591743 kubelet[2868]: I0124 03:09:21.590307 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5ea4b2c9-bfc3-43c5-8167-05971c627092-cni-net-dir\") pod \"calico-node-2xz9q\" (UID: \"5ea4b2c9-bfc3-43c5-8167-05971c627092\") " pod="calico-system/calico-node-2xz9q"
Jan 24 03:09:21.611455 kubelet[2868]: E0124 03:09:21.609813 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f"
Jan 24 03:09:21.630395 containerd[1627]: time="2026-01-24T03:09:21.630220605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b4f6f445-dhxxq,Uid:a429c078-694a-464b-9809-de7d403b2980,Namespace:calico-system,Attempt:0,}"
Jan 24 03:09:21.682186 containerd[1627]: time="2026-01-24T03:09:21.681749981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 03:09:21.682420 containerd[1627]: time="2026-01-24T03:09:21.682279325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 03:09:21.682420 containerd[1627]: time="2026-01-24T03:09:21.682377934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 03:09:21.683293 containerd[1627]: time="2026-01-24T03:09:21.682990657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 03:09:21.692901 kubelet[2868]: I0124 03:09:21.692818 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c3d4cc92-f20f-4793-8073-7a8fb294fc7f-socket-dir\") pod \"csi-node-driver-7rk5p\" (UID: \"c3d4cc92-f20f-4793-8073-7a8fb294fc7f\") " pod="calico-system/csi-node-driver-7rk5p"
Jan 24 03:09:21.693039 kubelet[2868]: I0124 03:09:21.692918 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c3d4cc92-f20f-4793-8073-7a8fb294fc7f-varrun\") pod \"csi-node-driver-7rk5p\" (UID: \"c3d4cc92-f20f-4793-8073-7a8fb294fc7f\") " pod="calico-system/csi-node-driver-7rk5p"
Jan 24 03:09:21.693039 kubelet[2868]: I0124 03:09:21.692984 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3d4cc92-f20f-4793-8073-7a8fb294fc7f-kubelet-dir\") pod \"csi-node-driver-7rk5p\" (UID: \"c3d4cc92-f20f-4793-8073-7a8fb294fc7f\") " pod="calico-system/csi-node-driver-7rk5p"
Jan 24 03:09:21.693039 kubelet[2868]: I0124 03:09:21.693014 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfw7b\" (UniqueName: \"kubernetes.io/projected/c3d4cc92-f20f-4793-8073-7a8fb294fc7f-kube-api-access-kfw7b\") pod \"csi-node-driver-7rk5p\" (UID: \"c3d4cc92-f20f-4793-8073-7a8fb294fc7f\") " pod="calico-system/csi-node-driver-7rk5p"
Jan 24 03:09:21.693210 kubelet[2868]: I0124 03:09:21.693047 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c3d4cc92-f20f-4793-8073-7a8fb294fc7f-registration-dir\") pod \"csi-node-driver-7rk5p\" (UID: \"c3d4cc92-f20f-4793-8073-7a8fb294fc7f\") " pod="calico-system/csi-node-driver-7rk5p"
Jan 24 03:09:21.714551 kubelet[2868]: E0124 03:09:21.713897 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.714551 kubelet[2868]: W0124 03:09:21.713939 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.722396 kubelet[2868]: E0124 03:09:21.720648 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.732310 kubelet[2868]: E0124 03:09:21.730725 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.732310 kubelet[2868]: W0124 03:09:21.730758 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.732310 kubelet[2868]: E0124 03:09:21.730805 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.734619 kubelet[2868]: E0124 03:09:21.732897 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.734619 kubelet[2868]: W0124 03:09:21.732921 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.734619 kubelet[2868]: E0124 03:09:21.732942 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.734619 kubelet[2868]: E0124 03:09:21.733396 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.734619 kubelet[2868]: W0124 03:09:21.733411 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.734619 kubelet[2868]: E0124 03:09:21.733429 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.734984 kubelet[2868]: E0124 03:09:21.734811 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.734984 kubelet[2868]: W0124 03:09:21.734827 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.734984 kubelet[2868]: E0124 03:09:21.734850 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.740709 kubelet[2868]: E0124 03:09:21.735162 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.740709 kubelet[2868]: W0124 03:09:21.735179 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.740709 kubelet[2868]: E0124 03:09:21.735219 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.740709 kubelet[2868]: E0124 03:09:21.739732 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.740709 kubelet[2868]: W0124 03:09:21.739755 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.740709 kubelet[2868]: E0124 03:09:21.739776 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.741953 kubelet[2868]: E0124 03:09:21.741134 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.741953 kubelet[2868]: W0124 03:09:21.741151 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.741953 kubelet[2868]: E0124 03:09:21.741167 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.742097 kubelet[2868]: E0124 03:09:21.742019 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.742097 kubelet[2868]: W0124 03:09:21.742035 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.742097 kubelet[2868]: E0124 03:09:21.742062 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.750849 kubelet[2868]: E0124 03:09:21.750802 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.750849 kubelet[2868]: W0124 03:09:21.750840 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.751062 kubelet[2868]: E0124 03:09:21.750873 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.752362 kubelet[2868]: E0124 03:09:21.751663 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.752362 kubelet[2868]: W0124 03:09:21.751699 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.752362 kubelet[2868]: E0124 03:09:21.751718 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.758299 kubelet[2868]: E0124 03:09:21.757921 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.758299 kubelet[2868]: W0124 03:09:21.757955 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.758299 kubelet[2868]: E0124 03:09:21.758069 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.758635 kubelet[2868]: E0124 03:09:21.758410 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.758635 kubelet[2868]: W0124 03:09:21.758425 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.765102 kubelet[2868]: E0124 03:09:21.761562 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.765102 kubelet[2868]: E0124 03:09:21.761871 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.765102 kubelet[2868]: W0124 03:09:21.761887 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.765102 kubelet[2868]: E0124 03:09:21.762204 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.765102 kubelet[2868]: W0124 03:09:21.762219 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.765102 kubelet[2868]: E0124 03:09:21.762466 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.765102 kubelet[2868]: W0124 03:09:21.762480 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.765102 kubelet[2868]: E0124 03:09:21.762496 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.765102 kubelet[2868]: E0124 03:09:21.762787 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.765102 kubelet[2868]: W0124 03:09:21.762801 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.768807 kubelet[2868]: E0124 03:09:21.762816 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.768807 kubelet[2868]: E0124 03:09:21.763103 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.768807 kubelet[2868]: W0124 03:09:21.763117 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.768807 kubelet[2868]: E0124 03:09:21.763135 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.768807 kubelet[2868]: E0124 03:09:21.764240 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.768807 kubelet[2868]: E0124 03:09:21.764297 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.791639 kubelet[2868]: E0124 03:09:21.791035 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.791639 kubelet[2868]: W0124 03:09:21.791066 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.791639 kubelet[2868]: E0124 03:09:21.791102 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.794806 kubelet[2868]: E0124 03:09:21.794428 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.794806 kubelet[2868]: W0124 03:09:21.794452 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.794806 kubelet[2868]: E0124 03:09:21.794473 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.795194 kubelet[2868]: E0124 03:09:21.794825 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.795194 kubelet[2868]: W0124 03:09:21.794840 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.795194 kubelet[2868]: E0124 03:09:21.794906 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.796297 kubelet[2868]: E0124 03:09:21.795702 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.796297 kubelet[2868]: W0124 03:09:21.795746 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.796297 kubelet[2868]: E0124 03:09:21.795774 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.796945 kubelet[2868]: E0124 03:09:21.796712 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.796945 kubelet[2868]: W0124 03:09:21.796732 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.796945 kubelet[2868]: E0124 03:09:21.796759 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.798061 kubelet[2868]: E0124 03:09:21.797698 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.798061 kubelet[2868]: W0124 03:09:21.797716 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.798061 kubelet[2868]: E0124 03:09:21.797765 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.799372 kubelet[2868]: E0124 03:09:21.798564 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.799372 kubelet[2868]: W0124 03:09:21.798761 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.799372 kubelet[2868]: E0124 03:09:21.798798 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.799865 kubelet[2868]: E0124 03:09:21.799845 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.799993 kubelet[2868]: W0124 03:09:21.799971 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.801277 kubelet[2868]: E0124 03:09:21.800880 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.801277 kubelet[2868]: W0124 03:09:21.800899 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.801799 kubelet[2868]: E0124 03:09:21.801640 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.801799 kubelet[2868]: E0124 03:09:21.801667 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.801799 kubelet[2868]: W0124 03:09:21.801683 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.801799 kubelet[2868]: E0124 03:09:21.801708 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.801799 kubelet[2868]: E0124 03:09:21.801729 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.802742 kubelet[2868]: E0124 03:09:21.802470 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.802742 kubelet[2868]: W0124 03:09:21.802487 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.802742 kubelet[2868]: E0124 03:09:21.802540 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 03:09:21.803090 kubelet[2868]: E0124 03:09:21.802984 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 03:09:21.803090 kubelet[2868]: W0124 03:09:21.802999 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 03:09:21.803410 kubelet[2868]: E0124 03:09:21.803236 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 24 03:09:21.803839 kubelet[2868]: E0124 03:09:21.803732 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.803839 kubelet[2868]: W0124 03:09:21.803757 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.803839 kubelet[2868]: E0124 03:09:21.803784 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:21.804276 kubelet[2868]: E0124 03:09:21.804235 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.804276 kubelet[2868]: W0124 03:09:21.804268 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.804644 kubelet[2868]: E0124 03:09:21.804299 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:21.804718 kubelet[2868]: E0124 03:09:21.804663 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.804718 kubelet[2868]: W0124 03:09:21.804678 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.804718 kubelet[2868]: E0124 03:09:21.804694 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:21.805033 kubelet[2868]: E0124 03:09:21.805012 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.805033 kubelet[2868]: W0124 03:09:21.805033 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.805362 kubelet[2868]: E0124 03:09:21.805279 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:21.805362 kubelet[2868]: E0124 03:09:21.805349 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.806666 kubelet[2868]: W0124 03:09:21.805368 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.806666 kubelet[2868]: E0124 03:09:21.805406 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 03:09:21.806666 kubelet[2868]: E0124 03:09:21.805744 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.806666 kubelet[2868]: W0124 03:09:21.805759 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.806666 kubelet[2868]: E0124 03:09:21.806138 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.806666 kubelet[2868]: W0124 03:09:21.806155 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.806666 kubelet[2868]: E0124 03:09:21.806543 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.806666 kubelet[2868]: W0124 03:09:21.806565 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.806666 kubelet[2868]: E0124 03:09:21.806616 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:21.807107 kubelet[2868]: E0124 03:09:21.806953 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.807107 kubelet[2868]: W0124 03:09:21.806968 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.807107 kubelet[2868]: E0124 03:09:21.807011 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:21.807560 kubelet[2868]: E0124 03:09:21.807533 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.807560 kubelet[2868]: W0124 03:09:21.807556 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.807707 kubelet[2868]: E0124 03:09:21.807584 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:21.808280 kubelet[2868]: E0124 03:09:21.807905 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 03:09:21.809070 kubelet[2868]: E0124 03:09:21.809035 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.809070 kubelet[2868]: W0124 03:09:21.809058 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.809412 kubelet[2868]: E0124 03:09:21.809076 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:21.809412 kubelet[2868]: E0124 03:09:21.809117 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:21.811432 kubelet[2868]: E0124 03:09:21.811407 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.811541 kubelet[2868]: W0124 03:09:21.811435 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.811541 kubelet[2868]: E0124 03:09:21.811462 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:21.812944 kubelet[2868]: E0124 03:09:21.812912 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.812944 kubelet[2868]: W0124 03:09:21.812938 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.813062 kubelet[2868]: E0124 03:09:21.812957 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:21.813632 kubelet[2868]: E0124 03:09:21.813280 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.813632 kubelet[2868]: W0124 03:09:21.813302 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.813632 kubelet[2868]: E0124 03:09:21.813362 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 03:09:21.825724 containerd[1627]: time="2026-01-24T03:09:21.825311617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2xz9q,Uid:5ea4b2c9-bfc3-43c5-8167-05971c627092,Namespace:calico-system,Attempt:0,}" Jan 24 03:09:21.833733 kubelet[2868]: E0124 03:09:21.833589 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:21.833733 kubelet[2868]: W0124 03:09:21.833657 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:21.833733 kubelet[2868]: E0124 03:09:21.833682 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:21.902517 containerd[1627]: time="2026-01-24T03:09:21.902165297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:09:21.902517 containerd[1627]: time="2026-01-24T03:09:21.902241066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:09:21.902517 containerd[1627]: time="2026-01-24T03:09:21.902258374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:21.902517 containerd[1627]: time="2026-01-24T03:09:21.902402053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:21.927187 containerd[1627]: time="2026-01-24T03:09:21.927110684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b4f6f445-dhxxq,Uid:a429c078-694a-464b-9809-de7d403b2980,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6f51887658e6240577c3579d5e2a90a4329350384bb0d7ab28d379e95debd80\"" Jan 24 03:09:21.966715 containerd[1627]: time="2026-01-24T03:09:21.965982103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 03:09:21.991271 containerd[1627]: time="2026-01-24T03:09:21.990775126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2xz9q,Uid:5ea4b2c9-bfc3-43c5-8167-05971c627092,Namespace:calico-system,Attempt:0,} returns sandbox id \"144531c1fb56e7e52859baea64dda8ab6d4ffc7d87c2c0c3852c20c828821c1d\"" Jan 24 03:09:23.123163 kubelet[2868]: E0124 03:09:23.123079 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:09:23.663214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3077777596.mount: Deactivated successfully. 
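Context on the repeated failures above: kubelet's FlexVolume prober execs each driver binary found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec with the single argument init and unmarshals whatever the driver prints to stdout as a JSON status object. Here the nodeagent~uds/uds binary is absent, so the exec fails ("executable file not found in $PATH"), stdout stays empty, and unmarshalling the empty string produces "unexpected end of JSON input". A minimal Go sketch of the init handshake such a driver would have to implement — a hypothetical stand-in, not the real nodeagent~uds binary, which is shipped by the workload rather than by Flatcar:

// uds-init-sketch.go: minimal FlexVolume "init" handshake (sketch only).
// A real driver also implements mount/unmount; this just shows the JSON
// contract that kubelet's driver-call.go tries to unmarshal above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON object kubelet expects on stdout.
type DriverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // returned by "init"
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Printing nothing here is exactly what yields
		// "unexpected end of JSON input" in the journal above.
		out, _ := json.Marshal(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}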
Jan 24 03:09:25.147073 kubelet[2868]: E0124 03:09:25.146976 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:09:25.928632 containerd[1627]: time="2026-01-24T03:09:25.927572839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:09:25.929551 containerd[1627]: time="2026-01-24T03:09:25.928685136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 24 03:09:25.930218 containerd[1627]: time="2026-01-24T03:09:25.929818993Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:09:25.932949 containerd[1627]: time="2026-01-24T03:09:25.932886723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:09:25.934360 containerd[1627]: time="2026-01-24T03:09:25.933857856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.967780196s" Jan 24 03:09:25.934360 containerd[1627]: time="2026-01-24T03:09:25.933904644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 03:09:25.938936 containerd[1627]: time="2026-01-24T03:09:25.938574605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 03:09:25.967950 containerd[1627]: time="2026-01-24T03:09:25.967886911Z" level=info msg="CreateContainer within sandbox \"f6f51887658e6240577c3579d5e2a90a4329350384bb0d7ab28d379e95debd80\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 03:09:25.988273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2148869579.mount: Deactivated successfully. 
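The recurring "network is not ready" entries for csi-node-driver-7rk5p are kubelet gating pod sync on the CRI runtime's NetworkReady condition; containerd keeps reporting NetworkReady=false until the calico-node pod started above installs a CNI config. A hedged diagnostic sketch in Go that reads the same condition straight from containerd's CRI socket (the socket path and the k8s.io/cri-api client are assumptions — neither appears in this log):

// cri-netready-sketch.go: query the CRI runtime status that kubelet
// checks before syncing pods with networks (diagnostic sketch only).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Assumed endpoint; kubelet talks to containerd over this socket.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Status(ctx, &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// Until the NetworkReady condition flips to true, kubelet logs the
	// "cni plugin not initialized" pod_workers errors seen above.
	for _, c := range resp.GetStatus().GetConditions() {
		fmt.Printf("%s=%v reason=%q\n", c.GetType(), c.GetStatus(), c.GetReason())
	}
}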
Jan 24 03:09:25.991218 containerd[1627]: time="2026-01-24T03:09:25.991173682Z" level=info msg="CreateContainer within sandbox \"f6f51887658e6240577c3579d5e2a90a4329350384bb0d7ab28d379e95debd80\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bd6ee4a44c50e6f880147f053e620c0917f41ed472943f0934137ca9d8d7d9e0\"" Jan 24 03:09:25.993331 containerd[1627]: time="2026-01-24T03:09:25.993296907Z" level=info msg="StartContainer for \"bd6ee4a44c50e6f880147f053e620c0917f41ed472943f0934137ca9d8d7d9e0\"" Jan 24 03:09:26.154554 containerd[1627]: time="2026-01-24T03:09:26.153402225Z" level=info msg="StartContainer for \"bd6ee4a44c50e6f880147f053e620c0917f41ed472943f0934137ca9d8d7d9e0\" returns successfully" Jan 24 03:09:26.407639 kubelet[2868]: E0124 03:09:26.406851 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:26.407639 kubelet[2868]: W0124 03:09:26.406901 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:26.407639 kubelet[2868]: E0124 03:09:26.406940 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[identical FlexVolume probe-failure triplets repeated through Jan 24 03:09:26.493]
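The probe failures arrive in bursts rather than once because kubelet's dynamic plugin prober watches the FlexVolume plugin directory for filesystem events and re-runs init for every driver it finds on each probe, so activity on the node replays the same nodeagent~uds failure. A sketch of that watch-and-reprobe pattern, assuming the fsnotify library (illustrative only; kubelet's actual prober lives in its flexvolume package):

// flexvol-watch-sketch.go: the watch-and-reprobe pattern behind the
// repeated probe-failure bursts (sketch, not kubelet's code).
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

const pluginDir = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add(pluginDir); err != nil {
		log.Fatal(err)
	}

	// Each event below would trigger a re-probe, i.e. another exec of
	// every driver's "init" — hence the repeated failure triplets.
	for {
		select {
		case ev := <-w.Events:
			log.Printf("would re-probe FlexVolume drivers: %s %s", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}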
Jan 24 03:09:27.123214 kubelet[2868]: E0124 03:09:27.122979 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:09:27.352221 kubelet[2868]: I0124 03:09:27.351654 2868 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 03:09:27.436934 kubelet[2868]: E0124 03:09:27.436879 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:27.437774 kubelet[2868]: W0124 03:09:27.437705 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:27.437847 kubelet[2868]: E0124 03:09:27.437792 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[identical FlexVolume probe-failure triplets repeated through Jan 24 03:09:27.487]
Jan 24 03:09:27.488173 kubelet[2868]: E0124 03:09:27.488079 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 03:09:27.488631 kubelet[2868]: E0124 03:09:27.488521 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:27.488631 kubelet[2868]: W0124 03:09:27.488574 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:27.488741 kubelet[2868]: E0124 03:09:27.488634 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:27.489044 kubelet[2868]: E0124 03:09:27.488996 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:27.489212 kubelet[2868]: W0124 03:09:27.489048 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:27.489212 kubelet[2868]: E0124 03:09:27.489079 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:27.489731 kubelet[2868]: E0124 03:09:27.489553 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:27.489731 kubelet[2868]: W0124 03:09:27.489570 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:27.489731 kubelet[2868]: E0124 03:09:27.489724 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:27.490328 kubelet[2868]: E0124 03:09:27.490119 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:27.490328 kubelet[2868]: W0124 03:09:27.490256 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:27.490328 kubelet[2868]: E0124 03:09:27.490275 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 03:09:27.491176 kubelet[2868]: E0124 03:09:27.491138 2868 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 03:09:27.491176 kubelet[2868]: W0124 03:09:27.491160 2868 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 03:09:27.491176 kubelet[2868]: E0124 03:09:27.491177 2868 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 03:09:28.032994 containerd[1627]: time="2026-01-24T03:09:28.031783625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:09:28.033892 containerd[1627]: time="2026-01-24T03:09:28.033837160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 24 03:09:28.035016 containerd[1627]: time="2026-01-24T03:09:28.034983106Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:09:28.056997 containerd[1627]: time="2026-01-24T03:09:28.056914440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:09:28.058621 containerd[1627]: time="2026-01-24T03:09:28.058540945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.119855036s" Jan 24 03:09:28.058733 containerd[1627]: time="2026-01-24T03:09:28.058632581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 03:09:28.064662 containerd[1627]: time="2026-01-24T03:09:28.064610905Z" level=info msg="CreateContainer within sandbox \"144531c1fb56e7e52859baea64dda8ab6d4ffc7d87c2c0c3852c20c828821c1d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 03:09:28.084408 containerd[1627]: time="2026-01-24T03:09:28.084343511Z" level=info msg="CreateContainer within sandbox \"144531c1fb56e7e52859baea64dda8ab6d4ffc7d87c2c0c3852c20c828821c1d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cded28ae3b72c3485226af9de673979dd296963f7f0b17ac1ccbd7084cd72a31\"" Jan 24 03:09:28.087043 containerd[1627]: time="2026-01-24T03:09:28.086912122Z" level=info msg="StartContainer for \"cded28ae3b72c3485226af9de673979dd296963f7f0b17ac1ccbd7084cd72a31\"" Jan 24 03:09:28.202921 containerd[1627]: time="2026-01-24T03:09:28.202818957Z" level=info msg="StartContainer for \"cded28ae3b72c3485226af9de673979dd296963f7f0b17ac1ccbd7084cd72a31\" returns successfully" Jan 24 03:09:28.276251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cded28ae3b72c3485226af9de673979dd296963f7f0b17ac1ccbd7084cd72a31-rootfs.mount: Deactivated successfully. 
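The repeated kubelet triplets above are one failure seen through three layers: the FlexVolume prober execs the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument `init`, the exec fails because the binary does not exist yet, and the empty captured output is then fed to a JSON unmarshal, which is what yields "unexpected end of JSON input". The ghcr.io/flatcar/calico/pod2daemon-flexvol image pulled immediately afterwards appears to be what installs that driver, and the flexvol-driver container exiting right after start (the rootfs.mount deactivation above, and the shim-disconnected messages below) is consistent with a copy-and-exit container. A minimal Go sketch of the unmarshal step — the struct is a trimmed-down illustration, not kubelet's exact DriverStatus type — reproduces the exact error string:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus is an illustrative stand-in for the JSON a FlexVolume
// driver is expected to print in response to `init`.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The uds binary is missing, so the output kubelet captures is empty.
	output := []byte("")

	var st driverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		fmt.Println(err) // unexpected end of JSON input
	}
}
```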
Jan 24 03:09:28.300191 containerd[1627]: time="2026-01-24T03:09:28.279709461Z" level=info msg="shim disconnected" id=cded28ae3b72c3485226af9de673979dd296963f7f0b17ac1ccbd7084cd72a31 namespace=k8s.io Jan 24 03:09:28.300584 containerd[1627]: time="2026-01-24T03:09:28.300386382Z" level=warning msg="cleaning up after shim disconnected" id=cded28ae3b72c3485226af9de673979dd296963f7f0b17ac1ccbd7084cd72a31 namespace=k8s.io Jan 24 03:09:28.300584 containerd[1627]: time="2026-01-24T03:09:28.300417068Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 03:09:28.361886 containerd[1627]: time="2026-01-24T03:09:28.361057150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 03:09:28.396099 kubelet[2868]: I0124 03:09:28.395902 2868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b4f6f445-dhxxq" podStartSLOduration=3.419520427 podStartE2EDuration="7.395879589s" podCreationTimestamp="2026-01-24 03:09:21 +0000 UTC" firstStartedPulling="2026-01-24 03:09:21.959676577 +0000 UTC m=+26.058923766" lastFinishedPulling="2026-01-24 03:09:25.936035732 +0000 UTC m=+30.035282928" observedRunningTime="2026-01-24 03:09:26.444983424 +0000 UTC m=+30.544230641" watchObservedRunningTime="2026-01-24 03:09:28.395879589 +0000 UTC m=+32.495126916" Jan 24 03:09:29.122868 kubelet[2868]: E0124 03:09:29.122737 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:09:31.123620 kubelet[2868]: E0124 03:09:31.122111 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:09:33.124701 kubelet[2868]: E0124 03:09:33.123140 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:09:33.613645 containerd[1627]: time="2026-01-24T03:09:33.612924812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:09:33.615160 containerd[1627]: time="2026-01-24T03:09:33.614332282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 24 03:09:33.616635 containerd[1627]: time="2026-01-24T03:09:33.616035781Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:09:33.621663 containerd[1627]: time="2026-01-24T03:09:33.620357791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:09:33.622071 containerd[1627]: time="2026-01-24T03:09:33.621845564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" 
with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.26072508s" Jan 24 03:09:33.622071 containerd[1627]: time="2026-01-24T03:09:33.621948069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 03:09:33.629469 containerd[1627]: time="2026-01-24T03:09:33.627927801Z" level=info msg="CreateContainer within sandbox \"144531c1fb56e7e52859baea64dda8ab6d4ffc7d87c2c0c3852c20c828821c1d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 03:09:33.667955 containerd[1627]: time="2026-01-24T03:09:33.667792563Z" level=info msg="CreateContainer within sandbox \"144531c1fb56e7e52859baea64dda8ab6d4ffc7d87c2c0c3852c20c828821c1d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4bf812aafcf801049cf6f5c8cb16380f0af9e927cddf9a4b22d330a5c8062f5d\"" Jan 24 03:09:33.671981 containerd[1627]: time="2026-01-24T03:09:33.671936143Z" level=info msg="StartContainer for \"4bf812aafcf801049cf6f5c8cb16380f0af9e927cddf9a4b22d330a5c8062f5d\"" Jan 24 03:09:33.911289 containerd[1627]: time="2026-01-24T03:09:33.911202684Z" level=info msg="StartContainer for \"4bf812aafcf801049cf6f5c8cb16380f0af9e927cddf9a4b22d330a5c8062f5d\" returns successfully" Jan 24 03:09:35.114100 kubelet[2868]: I0124 03:09:35.111095 2868 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 03:09:35.122834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bf812aafcf801049cf6f5c8cb16380f0af9e927cddf9a4b22d330a5c8062f5d-rootfs.mount: Deactivated successfully. 
Jan 24 03:09:35.131839 containerd[1627]: time="2026-01-24T03:09:35.129011582Z" level=info msg="shim disconnected" id=4bf812aafcf801049cf6f5c8cb16380f0af9e927cddf9a4b22d330a5c8062f5d namespace=k8s.io Jan 24 03:09:35.131839 containerd[1627]: time="2026-01-24T03:09:35.129101657Z" level=warning msg="cleaning up after shim disconnected" id=4bf812aafcf801049cf6f5c8cb16380f0af9e927cddf9a4b22d330a5c8062f5d namespace=k8s.io Jan 24 03:09:35.131839 containerd[1627]: time="2026-01-24T03:09:35.129117553Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 03:09:35.142639 containerd[1627]: time="2026-01-24T03:09:35.141707183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rk5p,Uid:c3d4cc92-f20f-4793-8073-7a8fb294fc7f,Namespace:calico-system,Attempt:0,}" Jan 24 03:09:35.248639 kubelet[2868]: I0124 03:09:35.245071 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/76ab4499-021b-4baa-941b-8b5ea5143e46-calico-apiserver-certs\") pod \"calico-apiserver-569dd98ffb-zpcp9\" (UID: \"76ab4499-021b-4baa-941b-8b5ea5143e46\") " pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" Jan 24 03:09:35.248639 kubelet[2868]: I0124 03:09:35.245155 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5kqj\" (UniqueName: \"kubernetes.io/projected/76ab4499-021b-4baa-941b-8b5ea5143e46-kube-api-access-d5kqj\") pod \"calico-apiserver-569dd98ffb-zpcp9\" (UID: \"76ab4499-021b-4baa-941b-8b5ea5143e46\") " pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" Jan 24 03:09:35.248639 kubelet[2868]: I0124 03:09:35.245192 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxfv2\" (UniqueName: \"kubernetes.io/projected/00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c-kube-api-access-xxfv2\") pod \"calico-apiserver-569dd98ffb-4br8n\" (UID: \"00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c\") " pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" Jan 24 03:09:35.248639 kubelet[2868]: I0124 03:09:35.245248 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c-calico-apiserver-certs\") pod \"calico-apiserver-569dd98ffb-4br8n\" (UID: \"00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c\") " pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" Jan 24 03:09:35.348119 kubelet[2868]: I0124 03:09:35.347826 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94ce2e5d-4660-46c3-961b-bbe64cee7f9e-config-volume\") pod \"coredns-668d6bf9bc-dpcwb\" (UID: \"94ce2e5d-4660-46c3-961b-bbe64cee7f9e\") " pod="kube-system/coredns-668d6bf9bc-dpcwb" Jan 24 03:09:35.351471 kubelet[2868]: I0124 03:09:35.348640 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b46b6c51-14b1-4c45-8faa-d27677477dc3-config\") pod \"goldmane-666569f655-54qqp\" (UID: \"b46b6c51-14b1-4c45-8faa-d27677477dc3\") " pod="calico-system/goldmane-666569f655-54qqp" Jan 24 03:09:35.351471 kubelet[2868]: I0124 03:09:35.348686 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b46b6c51-14b1-4c45-8faa-d27677477dc3-goldmane-ca-bundle\") pod \"goldmane-666569f655-54qqp\" (UID: \"b46b6c51-14b1-4c45-8faa-d27677477dc3\") " pod="calico-system/goldmane-666569f655-54qqp" Jan 24 03:09:35.351471 kubelet[2868]: I0124 03:09:35.348735 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e3cd30f3-05f1-4f59-9983-53b558455fdb-whisker-backend-key-pair\") pod \"whisker-5ffff4665c-wr7xc\" (UID: \"e3cd30f3-05f1-4f59-9983-53b558455fdb\") " pod="calico-system/whisker-5ffff4665c-wr7xc" Jan 24 03:09:35.351471 kubelet[2868]: I0124 03:09:35.348771 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b46b6c51-14b1-4c45-8faa-d27677477dc3-goldmane-key-pair\") pod \"goldmane-666569f655-54qqp\" (UID: \"b46b6c51-14b1-4c45-8faa-d27677477dc3\") " pod="calico-system/goldmane-666569f655-54qqp" Jan 24 03:09:35.351471 kubelet[2868]: I0124 03:09:35.348806 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b869920c-5e36-401e-9670-1efb848b70fd-config-volume\") pod \"coredns-668d6bf9bc-jvx2v\" (UID: \"b869920c-5e36-401e-9670-1efb848b70fd\") " pod="kube-system/coredns-668d6bf9bc-jvx2v" Jan 24 03:09:35.351844 kubelet[2868]: I0124 03:09:35.348872 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b9a31a8-5cc7-4ee4-9145-620e764b84d5-tigera-ca-bundle\") pod \"calico-kube-controllers-64cd87fbdf-87r2w\" (UID: \"7b9a31a8-5cc7-4ee4-9145-620e764b84d5\") " pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" Jan 24 03:09:35.351844 kubelet[2868]: I0124 03:09:35.348915 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv6vz\" (UniqueName: \"kubernetes.io/projected/94ce2e5d-4660-46c3-961b-bbe64cee7f9e-kube-api-access-xv6vz\") pod \"coredns-668d6bf9bc-dpcwb\" (UID: \"94ce2e5d-4660-46c3-961b-bbe64cee7f9e\") " pod="kube-system/coredns-668d6bf9bc-dpcwb" Jan 24 03:09:35.351844 kubelet[2868]: I0124 03:09:35.348943 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x24z2\" (UniqueName: \"kubernetes.io/projected/b869920c-5e36-401e-9670-1efb848b70fd-kube-api-access-x24z2\") pod \"coredns-668d6bf9bc-jvx2v\" (UID: \"b869920c-5e36-401e-9670-1efb848b70fd\") " pod="kube-system/coredns-668d6bf9bc-jvx2v" Jan 24 03:09:35.351844 kubelet[2868]: I0124 03:09:35.348973 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5cht\" (UniqueName: \"kubernetes.io/projected/7b9a31a8-5cc7-4ee4-9145-620e764b84d5-kube-api-access-l5cht\") pod \"calico-kube-controllers-64cd87fbdf-87r2w\" (UID: \"7b9a31a8-5cc7-4ee4-9145-620e764b84d5\") " pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" Jan 24 03:09:35.351844 kubelet[2868]: I0124 03:09:35.349013 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q89p7\" (UniqueName: \"kubernetes.io/projected/e3cd30f3-05f1-4f59-9983-53b558455fdb-kube-api-access-q89p7\") pod \"whisker-5ffff4665c-wr7xc\" (UID: \"e3cd30f3-05f1-4f59-9983-53b558455fdb\") " 
pod="calico-system/whisker-5ffff4665c-wr7xc" Jan 24 03:09:35.352083 kubelet[2868]: I0124 03:09:35.349046 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3cd30f3-05f1-4f59-9983-53b558455fdb-whisker-ca-bundle\") pod \"whisker-5ffff4665c-wr7xc\" (UID: \"e3cd30f3-05f1-4f59-9983-53b558455fdb\") " pod="calico-system/whisker-5ffff4665c-wr7xc" Jan 24 03:09:35.352083 kubelet[2868]: I0124 03:09:35.349076 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2544f\" (UniqueName: \"kubernetes.io/projected/b46b6c51-14b1-4c45-8faa-d27677477dc3-kube-api-access-2544f\") pod \"goldmane-666569f655-54qqp\" (UID: \"b46b6c51-14b1-4c45-8faa-d27677477dc3\") " pod="calico-system/goldmane-666569f655-54qqp" Jan 24 03:09:35.417788 containerd[1627]: time="2026-01-24T03:09:35.417715783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 03:09:35.538082 containerd[1627]: time="2026-01-24T03:09:35.538030075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569dd98ffb-zpcp9,Uid:76ab4499-021b-4baa-941b-8b5ea5143e46,Namespace:calico-apiserver,Attempt:0,}" Jan 24 03:09:35.550084 containerd[1627]: time="2026-01-24T03:09:35.549706497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ffff4665c-wr7xc,Uid:e3cd30f3-05f1-4f59-9983-53b558455fdb,Namespace:calico-system,Attempt:0,}" Jan 24 03:09:35.559632 containerd[1627]: time="2026-01-24T03:09:35.559560345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-54qqp,Uid:b46b6c51-14b1-4c45-8faa-d27677477dc3,Namespace:calico-system,Attempt:0,}" Jan 24 03:09:35.563852 containerd[1627]: time="2026-01-24T03:09:35.563776848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569dd98ffb-4br8n,Uid:00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c,Namespace:calico-apiserver,Attempt:0,}" Jan 24 03:09:35.568144 containerd[1627]: time="2026-01-24T03:09:35.568105252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dpcwb,Uid:94ce2e5d-4660-46c3-961b-bbe64cee7f9e,Namespace:kube-system,Attempt:0,}" Jan 24 03:09:35.571002 containerd[1627]: time="2026-01-24T03:09:35.570586783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jvx2v,Uid:b869920c-5e36-401e-9670-1efb848b70fd,Namespace:kube-system,Attempt:0,}" Jan 24 03:09:35.577931 containerd[1627]: time="2026-01-24T03:09:35.576344513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cd87fbdf-87r2w,Uid:7b9a31a8-5cc7-4ee4-9145-620e764b84d5,Namespace:calico-system,Attempt:0,}" Jan 24 03:09:35.665686 containerd[1627]: time="2026-01-24T03:09:35.665610877Z" level=error msg="Failed to destroy network for sandbox \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:35.671786 containerd[1627]: time="2026-01-24T03:09:35.671449888Z" level=error msg="encountered an error cleaning up failed sandbox \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 24 03:09:35.685296 containerd[1627]: time="2026-01-24T03:09:35.685043837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rk5p,Uid:c3d4cc92-f20f-4793-8073-7a8fb294fc7f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:35.691783 kubelet[2868]: E0124 03:09:35.691012 2868 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:35.691783 kubelet[2868]: E0124 03:09:35.691161 2868 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rk5p" Jan 24 03:09:35.691783 kubelet[2868]: E0124 03:09:35.691221 2868 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rk5p" Jan 24 03:09:35.693708 kubelet[2868]: E0124 03:09:35.691312 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7rk5p_calico-system(c3d4cc92-f20f-4793-8073-7a8fb294fc7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7rk5p_calico-system(c3d4cc92-f20f-4793-8073-7a8fb294fc7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:09:35.905776 containerd[1627]: time="2026-01-24T03:09:35.905714934Z" level=error msg="Failed to destroy network for sandbox \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:35.906545 containerd[1627]: time="2026-01-24T03:09:35.906491389Z" level=error msg="encountered an error cleaning up failed sandbox \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:35.908290 containerd[1627]: time="2026-01-24T03:09:35.908247957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569dd98ffb-zpcp9,Uid:76ab4499-021b-4baa-941b-8b5ea5143e46,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:35.908948 kubelet[2868]: E0124 03:09:35.908898 2868 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:35.909134 kubelet[2868]: E0124 03:09:35.909103 2868 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" Jan 24 03:09:35.909357 kubelet[2868]: E0124 03:09:35.909326 2868 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" Jan 24 03:09:35.909753 kubelet[2868]: E0124 03:09:35.909712 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-569dd98ffb-zpcp9_calico-apiserver(76ab4499-021b-4baa-941b-8b5ea5143e46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-569dd98ffb-zpcp9_calico-apiserver(76ab4499-021b-4baa-941b-8b5ea5143e46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" podUID="76ab4499-021b-4baa-941b-8b5ea5143e46" Jan 24 03:09:35.982961 containerd[1627]: time="2026-01-24T03:09:35.982799645Z" level=error msg="Failed to destroy network for sandbox \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:35.984238 containerd[1627]: time="2026-01-24T03:09:35.984071266Z" level=error msg="encountered an error cleaning up failed sandbox \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:35.984238 containerd[1627]: time="2026-01-24T03:09:35.984140320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ffff4665c-wr7xc,Uid:e3cd30f3-05f1-4f59-9983-53b558455fdb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:35.986746 kubelet[2868]: E0124 03:09:35.984419 2868 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:35.986746 kubelet[2868]: E0124 03:09:35.984516 2868 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5ffff4665c-wr7xc" Jan 24 03:09:35.986746 kubelet[2868]: E0124 03:09:35.984567 2868 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5ffff4665c-wr7xc" Jan 24 03:09:35.987808 kubelet[2868]: E0124 03:09:35.984691 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5ffff4665c-wr7xc_calico-system(e3cd30f3-05f1-4f59-9983-53b558455fdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5ffff4665c-wr7xc_calico-system(e3cd30f3-05f1-4f59-9983-53b558455fdb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5ffff4665c-wr7xc" podUID="e3cd30f3-05f1-4f59-9983-53b558455fdb" Jan 24 03:09:36.002751 containerd[1627]: time="2026-01-24T03:09:36.002688760Z" level=error msg="Failed to destroy network for sandbox \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.003283 containerd[1627]: time="2026-01-24T03:09:36.003243527Z" level=error msg="encountered an error cleaning up failed sandbox 
\"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.003369 containerd[1627]: time="2026-01-24T03:09:36.003329022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dpcwb,Uid:94ce2e5d-4660-46c3-961b-bbe64cee7f9e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.003741 kubelet[2868]: E0124 03:09:36.003672 2868 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.003863 kubelet[2868]: E0124 03:09:36.003760 2868 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dpcwb" Jan 24 03:09:36.003863 kubelet[2868]: E0124 03:09:36.003795 2868 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dpcwb" Jan 24 03:09:36.004388 kubelet[2868]: E0124 03:09:36.003857 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dpcwb_kube-system(94ce2e5d-4660-46c3-961b-bbe64cee7f9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dpcwb_kube-system(94ce2e5d-4660-46c3-961b-bbe64cee7f9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dpcwb" podUID="94ce2e5d-4660-46c3-961b-bbe64cee7f9e" Jan 24 03:09:36.017358 containerd[1627]: time="2026-01-24T03:09:36.016628721Z" level=error msg="Failed to destroy network for sandbox \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.018214 containerd[1627]: time="2026-01-24T03:09:36.018171754Z" level=error 
msg="encountered an error cleaning up failed sandbox \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.018387 containerd[1627]: time="2026-01-24T03:09:36.018345702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jvx2v,Uid:b869920c-5e36-401e-9670-1efb848b70fd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.019086 kubelet[2868]: E0124 03:09:36.019017 2868 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.019191 kubelet[2868]: E0124 03:09:36.019101 2868 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jvx2v" Jan 24 03:09:36.019191 kubelet[2868]: E0124 03:09:36.019147 2868 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jvx2v" Jan 24 03:09:36.019367 kubelet[2868]: E0124 03:09:36.019208 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jvx2v_kube-system(b869920c-5e36-401e-9670-1efb848b70fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jvx2v_kube-system(b869920c-5e36-401e-9670-1efb848b70fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jvx2v" podUID="b869920c-5e36-401e-9670-1efb848b70fd" Jan 24 03:09:36.050641 containerd[1627]: time="2026-01-24T03:09:36.049690801Z" level=error msg="Failed to destroy network for sandbox \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.051225 containerd[1627]: 
time="2026-01-24T03:09:36.051186260Z" level=error msg="encountered an error cleaning up failed sandbox \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.051449 containerd[1627]: time="2026-01-24T03:09:36.051399389Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-54qqp,Uid:b46b6c51-14b1-4c45-8faa-d27677477dc3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.053661 kubelet[2868]: E0124 03:09:36.052908 2868 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.053661 kubelet[2868]: E0124 03:09:36.052983 2868 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-54qqp" Jan 24 03:09:36.053661 kubelet[2868]: E0124 03:09:36.053019 2868 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-54qqp" Jan 24 03:09:36.053956 kubelet[2868]: E0124 03:09:36.053075 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-54qqp_calico-system(b46b6c51-14b1-4c45-8faa-d27677477dc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-54qqp_calico-system(b46b6c51-14b1-4c45-8faa-d27677477dc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-54qqp" podUID="b46b6c51-14b1-4c45-8faa-d27677477dc3" Jan 24 03:09:36.059149 containerd[1627]: time="2026-01-24T03:09:36.058917144Z" level=error msg="Failed to destroy network for sandbox \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 24 03:09:36.062814 containerd[1627]: time="2026-01-24T03:09:36.062641837Z" level=error msg="encountered an error cleaning up failed sandbox \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.063000 containerd[1627]: time="2026-01-24T03:09:36.062798760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cd87fbdf-87r2w,Uid:7b9a31a8-5cc7-4ee4-9145-620e764b84d5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.063622 kubelet[2868]: E0124 03:09:36.063099 2868 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.063622 kubelet[2868]: E0124 03:09:36.063183 2868 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" Jan 24 03:09:36.063622 kubelet[2868]: E0124 03:09:36.063215 2868 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" Jan 24 03:09:36.065343 kubelet[2868]: E0124 03:09:36.063276 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64cd87fbdf-87r2w_calico-system(7b9a31a8-5cc7-4ee4-9145-620e764b84d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64cd87fbdf-87r2w_calico-system(7b9a31a8-5cc7-4ee4-9145-620e764b84d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" podUID="7b9a31a8-5cc7-4ee4-9145-620e764b84d5" Jan 24 03:09:36.067620 containerd[1627]: time="2026-01-24T03:09:36.067456140Z" level=error msg="Failed to destroy network for sandbox \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.068560 containerd[1627]: time="2026-01-24T03:09:36.068391737Z" level=error msg="encountered an error cleaning up failed sandbox \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.068871 containerd[1627]: time="2026-01-24T03:09:36.068770669Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569dd98ffb-4br8n,Uid:00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.070134 kubelet[2868]: E0124 03:09:36.070086 2868 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.070225 kubelet[2868]: E0124 03:09:36.070152 2868 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" Jan 24 03:09:36.070225 kubelet[2868]: E0124 03:09:36.070183 2868 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" Jan 24 03:09:36.070343 kubelet[2868]: E0124 03:09:36.070256 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-569dd98ffb-4br8n_calico-apiserver(00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-569dd98ffb-4br8n_calico-apiserver(00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" podUID="00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c" Jan 24 03:09:36.143274 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28-shm.mount: Deactivated successfully. Jan 24 03:09:36.417926 kubelet[2868]: I0124 03:09:36.417867 2868 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Jan 24 03:09:36.423772 kubelet[2868]: I0124 03:09:36.423720 2868 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Jan 24 03:09:36.432339 kubelet[2868]: I0124 03:09:36.431962 2868 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Jan 24 03:09:36.434960 kubelet[2868]: I0124 03:09:36.434918 2868 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Jan 24 03:09:36.450338 containerd[1627]: time="2026-01-24T03:09:36.449983155Z" level=info msg="StopPodSandbox for \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\"" Jan 24 03:09:36.452624 containerd[1627]: time="2026-01-24T03:09:36.452277467Z" level=info msg="StopPodSandbox for \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\"" Jan 24 03:09:36.452624 containerd[1627]: time="2026-01-24T03:09:36.452513997Z" level=info msg="Ensure that sandbox d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972 in task-service has been cleanup successfully" Jan 24 03:09:36.453341 containerd[1627]: time="2026-01-24T03:09:36.453278187Z" level=info msg="Ensure that sandbox b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc in task-service has been cleanup successfully" Jan 24 03:09:36.456283 containerd[1627]: time="2026-01-24T03:09:36.455739227Z" level=info msg="StopPodSandbox for \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\"" Jan 24 03:09:36.456283 containerd[1627]: time="2026-01-24T03:09:36.455955914Z" level=info msg="Ensure that sandbox da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31 in task-service has been cleanup successfully" Jan 24 03:09:36.458950 containerd[1627]: time="2026-01-24T03:09:36.458908954Z" level=info msg="StopPodSandbox for \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\"" Jan 24 03:09:36.459214 containerd[1627]: time="2026-01-24T03:09:36.459181501Z" level=info msg="Ensure that sandbox 721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf in task-service has been cleanup successfully" Jan 24 03:09:36.462314 kubelet[2868]: I0124 03:09:36.462093 2868 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Jan 24 03:09:36.462983 containerd[1627]: time="2026-01-24T03:09:36.462936520Z" level=info msg="StopPodSandbox for \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\"" Jan 24 03:09:36.466809 containerd[1627]: time="2026-01-24T03:09:36.466764626Z" level=info msg="Ensure that sandbox 4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc in task-service has been cleanup successfully" Jan 24 03:09:36.479629 kubelet[2868]: I0124 03:09:36.479165 2868 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Jan 24 03:09:36.483133 containerd[1627]: 
time="2026-01-24T03:09:36.483072978Z" level=info msg="StopPodSandbox for \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\"" Jan 24 03:09:36.489176 containerd[1627]: time="2026-01-24T03:09:36.489124246Z" level=info msg="Ensure that sandbox edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69 in task-service has been cleanup successfully" Jan 24 03:09:36.492179 kubelet[2868]: I0124 03:09:36.492140 2868 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Jan 24 03:09:36.500631 containerd[1627]: time="2026-01-24T03:09:36.500322655Z" level=info msg="StopPodSandbox for \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\"" Jan 24 03:09:36.501259 containerd[1627]: time="2026-01-24T03:09:36.500947591Z" level=info msg="Ensure that sandbox 5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28 in task-service has been cleanup successfully" Jan 24 03:09:36.512626 kubelet[2868]: I0124 03:09:36.511631 2868 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Jan 24 03:09:36.515667 containerd[1627]: time="2026-01-24T03:09:36.515626331Z" level=info msg="StopPodSandbox for \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\"" Jan 24 03:09:36.516108 containerd[1627]: time="2026-01-24T03:09:36.516069701Z" level=info msg="Ensure that sandbox e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67 in task-service has been cleanup successfully" Jan 24 03:09:36.603802 containerd[1627]: time="2026-01-24T03:09:36.603726205Z" level=error msg="StopPodSandbox for \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\" failed" error="failed to destroy network for sandbox \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.604463 kubelet[2868]: E0124 03:09:36.604149 2868 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Jan 24 03:09:36.604463 kubelet[2868]: E0124 03:09:36.604245 2868 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf"} Jan 24 03:09:36.604463 kubelet[2868]: E0124 03:09:36.604358 2868 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b9a31a8-5cc7-4ee4-9145-620e764b84d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 03:09:36.604463 kubelet[2868]: E0124 03:09:36.604394 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"KillPodSandbox\" for \"7b9a31a8-5cc7-4ee4-9145-620e764b84d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" podUID="7b9a31a8-5cc7-4ee4-9145-620e764b84d5" Jan 24 03:09:36.647621 containerd[1627]: time="2026-01-24T03:09:36.647203549Z" level=error msg="StopPodSandbox for \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\" failed" error="failed to destroy network for sandbox \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.648634 kubelet[2868]: E0124 03:09:36.648006 2868 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Jan 24 03:09:36.648634 kubelet[2868]: E0124 03:09:36.648083 2868 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31"} Jan 24 03:09:36.648634 kubelet[2868]: E0124 03:09:36.648139 2868 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 03:09:36.648634 kubelet[2868]: E0124 03:09:36.648174 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" podUID="00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c" Jan 24 03:09:36.683651 containerd[1627]: time="2026-01-24T03:09:36.683316895Z" level=error msg="StopPodSandbox for \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\" failed" error="failed to destroy network for sandbox \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.685544 kubelet[2868]: E0124 03:09:36.685306 2868 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Jan 24 03:09:36.685544 kubelet[2868]: E0124 03:09:36.685383 2868 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69"} Jan 24 03:09:36.685544 kubelet[2868]: E0124 03:09:36.685435 2868 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"76ab4499-021b-4baa-941b-8b5ea5143e46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 03:09:36.685544 kubelet[2868]: E0124 03:09:36.685482 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"76ab4499-021b-4baa-941b-8b5ea5143e46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" podUID="76ab4499-021b-4baa-941b-8b5ea5143e46" Jan 24 03:09:36.718817 containerd[1627]: time="2026-01-24T03:09:36.718719987Z" level=error msg="StopPodSandbox for \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\" failed" error="failed to destroy network for sandbox \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.719684 kubelet[2868]: E0124 03:09:36.719213 2868 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Jan 24 03:09:36.719684 kubelet[2868]: E0124 03:09:36.719287 2868 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67"} Jan 24 03:09:36.719684 kubelet[2868]: E0124 03:09:36.719338 2868 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b869920c-5e36-401e-9670-1efb848b70fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 03:09:36.719684 kubelet[2868]: E0124 03:09:36.719390 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b869920c-5e36-401e-9670-1efb848b70fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jvx2v" podUID="b869920c-5e36-401e-9670-1efb848b70fd" Jan 24 03:09:36.737465 containerd[1627]: time="2026-01-24T03:09:36.736479031Z" level=error msg="StopPodSandbox for \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\" failed" error="failed to destroy network for sandbox \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.737465 containerd[1627]: time="2026-01-24T03:09:36.736992615Z" level=error msg="StopPodSandbox for \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\" failed" error="failed to destroy network for sandbox \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.737914 kubelet[2868]: E0124 03:09:36.736886 2868 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Jan 24 03:09:36.737914 kubelet[2868]: E0124 03:09:36.736955 2868 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972"} Jan 24 03:09:36.737914 kubelet[2868]: E0124 03:09:36.737013 2868 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b46b6c51-14b1-4c45-8faa-d27677477dc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 03:09:36.737914 kubelet[2868]: E0124 03:09:36.737050 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b46b6c51-14b1-4c45-8faa-d27677477dc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-54qqp" podUID="b46b6c51-14b1-4c45-8faa-d27677477dc3" Jan 24 03:09:36.738363 kubelet[2868]: E0124 03:09:36.737263 2868 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Jan 24 03:09:36.738363 kubelet[2868]: E0124 03:09:36.737355 2868 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc"} Jan 24 03:09:36.738363 kubelet[2868]: E0124 03:09:36.737391 2868 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94ce2e5d-4660-46c3-961b-bbe64cee7f9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 03:09:36.738363 kubelet[2868]: E0124 03:09:36.737419 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94ce2e5d-4660-46c3-961b-bbe64cee7f9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dpcwb" podUID="94ce2e5d-4660-46c3-961b-bbe64cee7f9e" Jan 24 03:09:36.739724 containerd[1627]: time="2026-01-24T03:09:36.738663472Z" level=error msg="StopPodSandbox for \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\" failed" error="failed to destroy network for sandbox \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.739912 kubelet[2868]: E0124 03:09:36.738933 2868 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Jan 24 03:09:36.739912 kubelet[2868]: E0124 03:09:36.739003 2868 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc"} Jan 24 03:09:36.739912 kubelet[2868]: E0124 03:09:36.739039 2868 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"e3cd30f3-05f1-4f59-9983-53b558455fdb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 03:09:36.739912 kubelet[2868]: E0124 03:09:36.739074 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3cd30f3-05f1-4f59-9983-53b558455fdb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5ffff4665c-wr7xc" podUID="e3cd30f3-05f1-4f59-9983-53b558455fdb" Jan 24 03:09:36.740550 containerd[1627]: time="2026-01-24T03:09:36.740415299Z" level=error msg="StopPodSandbox for \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\" failed" error="failed to destroy network for sandbox \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 03:09:36.740837 kubelet[2868]: E0124 03:09:36.740769 2868 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Jan 24 03:09:36.740837 kubelet[2868]: E0124 03:09:36.740820 2868 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28"} Jan 24 03:09:36.740962 kubelet[2868]: E0124 03:09:36.740857 2868 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c3d4cc92-f20f-4793-8073-7a8fb294fc7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 03:09:36.740962 kubelet[2868]: E0124 03:09:36.740890 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3d4cc92-f20f-4793-8073-7a8fb294fc7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:09:38.135051 systemd-journald[1177]: Under memory 
pressure, flushing caches. Jan 24 03:09:38.115801 systemd-resolved[1515]: Under memory pressure, flushing caches. Jan 24 03:09:38.115910 systemd-resolved[1515]: Flushed all caches. Jan 24 03:09:39.285003 kubelet[2868]: I0124 03:09:39.284106 2868 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 03:09:45.739484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2764306542.mount: Deactivated successfully. Jan 24 03:09:45.846636 containerd[1627]: time="2026-01-24T03:09:45.845417469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 03:09:45.860158 containerd[1627]: time="2026-01-24T03:09:45.860043072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.435000937s" Jan 24 03:09:45.887077 containerd[1627]: time="2026-01-24T03:09:45.886438319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:09:45.971515 containerd[1627]: time="2026-01-24T03:09:45.971448729Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:09:45.974619 containerd[1627]: time="2026-01-24T03:09:45.974325509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:09:45.991260 containerd[1627]: time="2026-01-24T03:09:45.991194174Z" level=info msg="CreateContainer within sandbox \"144531c1fb56e7e52859baea64dda8ab6d4ffc7d87c2c0c3852c20c828821c1d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 03:09:46.061305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3735361623.mount: Deactivated successfully. Jan 24 03:09:46.076426 containerd[1627]: time="2026-01-24T03:09:46.076351946Z" level=info msg="CreateContainer within sandbox \"144531c1fb56e7e52859baea64dda8ab6d4ffc7d87c2c0c3852c20c828821c1d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"66f50179146f3b0c6f731bf3ddb17362451cc0b83da48e97730a4ee37d480679\"" Jan 24 03:09:46.078846 containerd[1627]: time="2026-01-24T03:09:46.078803114Z" level=info msg="StartContainer for \"66f50179146f3b0c6f731bf3ddb17362451cc0b83da48e97730a4ee37d480679\"" Jan 24 03:09:46.489477 containerd[1627]: time="2026-01-24T03:09:46.489378856Z" level=info msg="StartContainer for \"66f50179146f3b0c6f731bf3ddb17362451cc0b83da48e97730a4ee37d480679\" returns successfully" Jan 24 03:09:46.693303 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 03:09:46.693618 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Jan 24 03:09:47.171935 kubelet[2868]: I0124 03:09:47.162310 2868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2xz9q" podStartSLOduration=2.255984056 podStartE2EDuration="26.13268683s" podCreationTimestamp="2026-01-24 03:09:21 +0000 UTC" firstStartedPulling="2026-01-24 03:09:21.99476893 +0000 UTC m=+26.094016118" lastFinishedPulling="2026-01-24 03:09:45.871471703 +0000 UTC m=+49.970718892" observedRunningTime="2026-01-24 03:09:46.621058797 +0000 UTC m=+50.720305999" watchObservedRunningTime="2026-01-24 03:09:47.13268683 +0000 UTC m=+51.231934027" Jan 24 03:09:47.195967 containerd[1627]: time="2026-01-24T03:09:47.195663533Z" level=info msg="StopPodSandbox for \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\"" Jan 24 03:09:47.629482 systemd[1]: run-containerd-runc-k8s.io-66f50179146f3b0c6f731bf3ddb17362451cc0b83da48e97730a4ee37d480679-runc.80t1Ah.mount: Deactivated successfully. Jan 24 03:09:47.855949 containerd[1627]: 2026-01-24 03:09:47.402 [INFO][4096] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Jan 24 03:09:47.855949 containerd[1627]: 2026-01-24 03:09:47.409 [INFO][4096] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" iface="eth0" netns="/var/run/netns/cni-6337df0c-3bb5-077e-7c69-1dbd46877c9b" Jan 24 03:09:47.855949 containerd[1627]: 2026-01-24 03:09:47.410 [INFO][4096] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" iface="eth0" netns="/var/run/netns/cni-6337df0c-3bb5-077e-7c69-1dbd46877c9b" Jan 24 03:09:47.855949 containerd[1627]: 2026-01-24 03:09:47.411 [INFO][4096] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" iface="eth0" netns="/var/run/netns/cni-6337df0c-3bb5-077e-7c69-1dbd46877c9b" Jan 24 03:09:47.855949 containerd[1627]: 2026-01-24 03:09:47.411 [INFO][4096] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Jan 24 03:09:47.855949 containerd[1627]: 2026-01-24 03:09:47.412 [INFO][4096] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Jan 24 03:09:47.855949 containerd[1627]: 2026-01-24 03:09:47.755 [INFO][4105] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" HandleID="k8s-pod-network.b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Workload="srv--jddbi.gb1.brightbox.com-k8s-whisker--5ffff4665c--wr7xc-eth0" Jan 24 03:09:47.855949 containerd[1627]: 2026-01-24 03:09:47.758 [INFO][4105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:47.855949 containerd[1627]: 2026-01-24 03:09:47.759 [INFO][4105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:47.855949 containerd[1627]: 2026-01-24 03:09:47.809 [WARNING][4105] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" HandleID="k8s-pod-network.b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Workload="srv--jddbi.gb1.brightbox.com-k8s-whisker--5ffff4665c--wr7xc-eth0" Jan 24 03:09:47.855949 containerd[1627]: 2026-01-24 03:09:47.809 [INFO][4105] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" HandleID="k8s-pod-network.b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Workload="srv--jddbi.gb1.brightbox.com-k8s-whisker--5ffff4665c--wr7xc-eth0" Jan 24 03:09:47.855949 containerd[1627]: 2026-01-24 03:09:47.846 [INFO][4105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:47.855949 containerd[1627]: 2026-01-24 03:09:47.848 [INFO][4096] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Jan 24 03:09:47.860181 systemd[1]: run-netns-cni\x2d6337df0c\x2d3bb5\x2d077e\x2d7c69\x2d1dbd46877c9b.mount: Deactivated successfully. Jan 24 03:09:47.875882 containerd[1627]: time="2026-01-24T03:09:47.875273422Z" level=info msg="TearDown network for sandbox \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\" successfully" Jan 24 03:09:47.875882 containerd[1627]: time="2026-01-24T03:09:47.875332597Z" level=info msg="StopPodSandbox for \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\" returns successfully" Jan 24 03:09:47.931585 kubelet[2868]: I0124 03:09:47.931461 2868 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3cd30f3-05f1-4f59-9983-53b558455fdb-whisker-ca-bundle\") pod \"e3cd30f3-05f1-4f59-9983-53b558455fdb\" (UID: \"e3cd30f3-05f1-4f59-9983-53b558455fdb\") " Jan 24 03:09:47.931585 kubelet[2868]: I0124 03:09:47.931583 2868 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q89p7\" (UniqueName: \"kubernetes.io/projected/e3cd30f3-05f1-4f59-9983-53b558455fdb-kube-api-access-q89p7\") pod \"e3cd30f3-05f1-4f59-9983-53b558455fdb\" (UID: \"e3cd30f3-05f1-4f59-9983-53b558455fdb\") " Jan 24 03:09:47.931898 kubelet[2868]: I0124 03:09:47.931694 2868 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e3cd30f3-05f1-4f59-9983-53b558455fdb-whisker-backend-key-pair\") pod \"e3cd30f3-05f1-4f59-9983-53b558455fdb\" (UID: \"e3cd30f3-05f1-4f59-9983-53b558455fdb\") " Jan 24 03:09:47.941357 kubelet[2868]: I0124 03:09:47.938773 2868 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3cd30f3-05f1-4f59-9983-53b558455fdb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e3cd30f3-05f1-4f59-9983-53b558455fdb" (UID: "e3cd30f3-05f1-4f59-9983-53b558455fdb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 03:09:47.953791 systemd[1]: var-lib-kubelet-pods-e3cd30f3\x2d05f1\x2d4f59\x2d9983\x2d53b558455fdb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq89p7.mount: Deactivated successfully. 
Jan 24 03:09:47.958212 kubelet[2868]: I0124 03:09:47.957461 2868 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3cd30f3-05f1-4f59-9983-53b558455fdb-kube-api-access-q89p7" (OuterVolumeSpecName: "kube-api-access-q89p7") pod "e3cd30f3-05f1-4f59-9983-53b558455fdb" (UID: "e3cd30f3-05f1-4f59-9983-53b558455fdb"). InnerVolumeSpecName "kube-api-access-q89p7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 03:09:47.967891 kubelet[2868]: I0124 03:09:47.967842 2868 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cd30f3-05f1-4f59-9983-53b558455fdb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e3cd30f3-05f1-4f59-9983-53b558455fdb" (UID: "e3cd30f3-05f1-4f59-9983-53b558455fdb"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 03:09:47.969678 systemd[1]: var-lib-kubelet-pods-e3cd30f3\x2d05f1\x2d4f59\x2d9983\x2d53b558455fdb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 03:09:48.041913 kubelet[2868]: I0124 03:09:48.040360 2868 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3cd30f3-05f1-4f59-9983-53b558455fdb-whisker-ca-bundle\") on node \"srv-jddbi.gb1.brightbox.com\" DevicePath \"\"" Jan 24 03:09:48.042687 kubelet[2868]: I0124 03:09:48.042102 2868 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q89p7\" (UniqueName: \"kubernetes.io/projected/e3cd30f3-05f1-4f59-9983-53b558455fdb-kube-api-access-q89p7\") on node \"srv-jddbi.gb1.brightbox.com\" DevicePath \"\"" Jan 24 03:09:48.042687 kubelet[2868]: I0124 03:09:48.042143 2868 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e3cd30f3-05f1-4f59-9983-53b558455fdb-whisker-backend-key-pair\") on node \"srv-jddbi.gb1.brightbox.com\" DevicePath \"\"" Jan 24 03:09:48.125925 containerd[1627]: time="2026-01-24T03:09:48.125815672Z" level=info msg="StopPodSandbox for \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\"" Jan 24 03:09:48.127397 containerd[1627]: time="2026-01-24T03:09:48.127096317Z" level=info msg="StopPodSandbox for \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\"" Jan 24 03:09:48.395825 containerd[1627]: 2026-01-24 03:09:48.239 [INFO][4161] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Jan 24 03:09:48.395825 containerd[1627]: 2026-01-24 03:09:48.242 [INFO][4161] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" iface="eth0" netns="/var/run/netns/cni-dec2bdb3-1edf-9f9d-3cec-2f2494bb24c6" Jan 24 03:09:48.395825 containerd[1627]: 2026-01-24 03:09:48.243 [INFO][4161] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" iface="eth0" netns="/var/run/netns/cni-dec2bdb3-1edf-9f9d-3cec-2f2494bb24c6" Jan 24 03:09:48.395825 containerd[1627]: 2026-01-24 03:09:48.245 [INFO][4161] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" iface="eth0" netns="/var/run/netns/cni-dec2bdb3-1edf-9f9d-3cec-2f2494bb24c6" Jan 24 03:09:48.395825 containerd[1627]: 2026-01-24 03:09:48.245 [INFO][4161] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Jan 24 03:09:48.395825 containerd[1627]: 2026-01-24 03:09:48.245 [INFO][4161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Jan 24 03:09:48.395825 containerd[1627]: 2026-01-24 03:09:48.321 [INFO][4177] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" HandleID="k8s-pod-network.5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Workload="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:48.395825 containerd[1627]: 2026-01-24 03:09:48.322 [INFO][4177] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:48.395825 containerd[1627]: 2026-01-24 03:09:48.322 [INFO][4177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:48.395825 containerd[1627]: 2026-01-24 03:09:48.355 [WARNING][4177] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" HandleID="k8s-pod-network.5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Workload="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:48.395825 containerd[1627]: 2026-01-24 03:09:48.357 [INFO][4177] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" HandleID="k8s-pod-network.5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Workload="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:48.395825 containerd[1627]: 2026-01-24 03:09:48.370 [INFO][4177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:48.395825 containerd[1627]: 2026-01-24 03:09:48.383 [INFO][4161] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Jan 24 03:09:48.402874 containerd[1627]: time="2026-01-24T03:09:48.396254514Z" level=info msg="TearDown network for sandbox \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\" successfully" Jan 24 03:09:48.402874 containerd[1627]: time="2026-01-24T03:09:48.396322793Z" level=info msg="StopPodSandbox for \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\" returns successfully" Jan 24 03:09:48.403362 containerd[1627]: time="2026-01-24T03:09:48.403108668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rk5p,Uid:c3d4cc92-f20f-4793-8073-7a8fb294fc7f,Namespace:calico-system,Attempt:1,}" Jan 24 03:09:48.406268 systemd[1]: run-netns-cni\x2ddec2bdb3\x2d1edf\x2d9f9d\x2d3cec\x2d2f2494bb24c6.mount: Deactivated successfully. Jan 24 03:09:48.442862 containerd[1627]: 2026-01-24 03:09:48.264 [INFO][4165] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Jan 24 03:09:48.442862 containerd[1627]: 2026-01-24 03:09:48.265 [INFO][4165] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" iface="eth0" netns="/var/run/netns/cni-0518cace-faae-0aa2-6582-947a396746ee" Jan 24 03:09:48.442862 containerd[1627]: 2026-01-24 03:09:48.265 [INFO][4165] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" iface="eth0" netns="/var/run/netns/cni-0518cace-faae-0aa2-6582-947a396746ee" Jan 24 03:09:48.442862 containerd[1627]: 2026-01-24 03:09:48.266 [INFO][4165] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" iface="eth0" netns="/var/run/netns/cni-0518cace-faae-0aa2-6582-947a396746ee" Jan 24 03:09:48.442862 containerd[1627]: 2026-01-24 03:09:48.266 [INFO][4165] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Jan 24 03:09:48.442862 containerd[1627]: 2026-01-24 03:09:48.266 [INFO][4165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Jan 24 03:09:48.442862 containerd[1627]: 2026-01-24 03:09:48.341 [INFO][4182] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" HandleID="k8s-pod-network.edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:48.442862 containerd[1627]: 2026-01-24 03:09:48.344 [INFO][4182] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:48.442862 containerd[1627]: 2026-01-24 03:09:48.371 [INFO][4182] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:48.442862 containerd[1627]: 2026-01-24 03:09:48.410 [WARNING][4182] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" HandleID="k8s-pod-network.edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:48.442862 containerd[1627]: 2026-01-24 03:09:48.412 [INFO][4182] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" HandleID="k8s-pod-network.edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:48.442862 containerd[1627]: 2026-01-24 03:09:48.424 [INFO][4182] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:48.442862 containerd[1627]: 2026-01-24 03:09:48.432 [INFO][4165] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Jan 24 03:09:48.444872 containerd[1627]: time="2026-01-24T03:09:48.443040249Z" level=info msg="TearDown network for sandbox \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\" successfully" Jan 24 03:09:48.444872 containerd[1627]: time="2026-01-24T03:09:48.443097708Z" level=info msg="StopPodSandbox for \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\" returns successfully" Jan 24 03:09:48.451658 containerd[1627]: time="2026-01-24T03:09:48.449816460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569dd98ffb-zpcp9,Uid:76ab4499-021b-4baa-941b-8b5ea5143e46,Namespace:calico-apiserver,Attempt:1,}" Jan 24 03:09:48.854790 kubelet[2868]: I0124 03:09:48.854022 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d78211da-ca25-4f3e-be35-f78b1336c756-whisker-ca-bundle\") pod \"whisker-775d9ff4d9-p47mr\" (UID: \"d78211da-ca25-4f3e-be35-f78b1336c756\") " pod="calico-system/whisker-775d9ff4d9-p47mr" Jan 24 03:09:48.854790 kubelet[2868]: I0124 03:09:48.854133 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fndp\" (UniqueName: \"kubernetes.io/projected/d78211da-ca25-4f3e-be35-f78b1336c756-kube-api-access-5fndp\") pod \"whisker-775d9ff4d9-p47mr\" (UID: \"d78211da-ca25-4f3e-be35-f78b1336c756\") " pod="calico-system/whisker-775d9ff4d9-p47mr" Jan 24 03:09:48.854790 kubelet[2868]: I0124 03:09:48.854197 2868 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d78211da-ca25-4f3e-be35-f78b1336c756-whisker-backend-key-pair\") pod \"whisker-775d9ff4d9-p47mr\" (UID: \"d78211da-ca25-4f3e-be35-f78b1336c756\") " pod="calico-system/whisker-775d9ff4d9-p47mr" Jan 24 03:09:48.868260 systemd[1]: run-netns-cni\x2d0518cace\x2dfaae\x2d0aa2\x2d6582\x2d947a396746ee.mount: Deactivated successfully. 
Jan 24 03:09:49.109623 containerd[1627]: time="2026-01-24T03:09:49.107773764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775d9ff4d9-p47mr,Uid:d78211da-ca25-4f3e-be35-f78b1336c756,Namespace:calico-system,Attempt:0,}" Jan 24 03:09:49.150872 systemd-networkd[1260]: cali220ced87d92: Link UP Jan 24 03:09:49.158692 systemd-networkd[1260]: cali220ced87d92: Gained carrier Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:48.593 [INFO][4195] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:48.659 [INFO][4195] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0 csi-node-driver- calico-system c3d4cc92-f20f-4793-8073-7a8fb294fc7f 940 0 2026-01-24 03:09:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-jddbi.gb1.brightbox.com csi-node-driver-7rk5p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali220ced87d92 [] [] }} ContainerID="15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" Namespace="calico-system" Pod="csi-node-driver-7rk5p" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-" Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:48.659 [INFO][4195] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" Namespace="calico-system" Pod="csi-node-driver-7rk5p" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:48.817 [INFO][4236] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" HandleID="k8s-pod-network.15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" Workload="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:48.818 [INFO][4236] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" HandleID="k8s-pod-network.15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" Workload="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001035c0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-jddbi.gb1.brightbox.com", "pod":"csi-node-driver-7rk5p", "timestamp":"2026-01-24 03:09:48.817940882 +0000 UTC"}, Hostname:"srv-jddbi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:48.818 [INFO][4236] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:48.818 [INFO][4236] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:48.818 [INFO][4236] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jddbi.gb1.brightbox.com' Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:48.837 [INFO][4236] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:48.868 [INFO][4236] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:48.960 [INFO][4236] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:49.011 [INFO][4236] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:49.041 [INFO][4236] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:49.042 [INFO][4236] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:49.071 [INFO][4236] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07 Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:49.085 [INFO][4236] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:49.096 [INFO][4236] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.1/26] block=192.168.25.0/26 handle="k8s-pod-network.15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:49.097 [INFO][4236] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.1/26] handle="k8s-pod-network.15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:49.097 [INFO][4236] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 03:09:49.322959 containerd[1627]: 2026-01-24 03:09:49.097 [INFO][4236] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.1/26] IPv6=[] ContainerID="15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" HandleID="k8s-pod-network.15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" Workload="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:49.332933 containerd[1627]: 2026-01-24 03:09:49.111 [INFO][4195] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" Namespace="calico-system" Pod="csi-node-driver-7rk5p" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3d4cc92-f20f-4793-8073-7a8fb294fc7f", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-7rk5p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali220ced87d92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:49.332933 containerd[1627]: 2026-01-24 03:09:49.112 [INFO][4195] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.1/32] ContainerID="15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" Namespace="calico-system" Pod="csi-node-driver-7rk5p" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:49.332933 containerd[1627]: 2026-01-24 03:09:49.113 [INFO][4195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali220ced87d92 ContainerID="15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" Namespace="calico-system" Pod="csi-node-driver-7rk5p" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:49.332933 containerd[1627]: 2026-01-24 03:09:49.203 [INFO][4195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" Namespace="calico-system" Pod="csi-node-driver-7rk5p" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:49.332933 containerd[1627]: 2026-01-24 03:09:49.214 [INFO][4195] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" Namespace="calico-system" Pod="csi-node-driver-7rk5p" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3d4cc92-f20f-4793-8073-7a8fb294fc7f", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07", Pod:"csi-node-driver-7rk5p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali220ced87d92", MAC:"ee:9e:aa:16:b5:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:49.332933 containerd[1627]: 2026-01-24 03:09:49.289 [INFO][4195] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07" Namespace="calico-system" Pod="csi-node-driver-7rk5p" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:49.426124 containerd[1627]: time="2026-01-24T03:09:49.425899129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:09:49.430804 containerd[1627]: time="2026-01-24T03:09:49.429141558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:09:49.430804 containerd[1627]: time="2026-01-24T03:09:49.429798061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:49.433228 containerd[1627]: time="2026-01-24T03:09:49.433104456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:49.453351 systemd-networkd[1260]: cali56422ef3f21: Link UP Jan 24 03:09:49.453698 systemd-networkd[1260]: cali56422ef3f21: Gained carrier Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:48.595 [INFO][4200] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:48.658 [INFO][4200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0 calico-apiserver-569dd98ffb- calico-apiserver 76ab4499-021b-4baa-941b-8b5ea5143e46 941 0 2026-01-24 03:09:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:569dd98ffb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-jddbi.gb1.brightbox.com calico-apiserver-569dd98ffb-zpcp9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali56422ef3f21 [] [] }} ContainerID="a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-zpcp9" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-" Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:48.659 [INFO][4200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-zpcp9" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:48.823 [INFO][4231] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" HandleID="k8s-pod-network.a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:48.828 [INFO][4231] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" HandleID="k8s-pod-network.a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033d7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-jddbi.gb1.brightbox.com", "pod":"calico-apiserver-569dd98ffb-zpcp9", "timestamp":"2026-01-24 03:09:48.823940025 +0000 UTC"}, Hostname:"srv-jddbi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:48.829 [INFO][4231] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:49.097 [INFO][4231] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:49.099 [INFO][4231] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jddbi.gb1.brightbox.com' Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:49.127 [INFO][4231] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:49.190 [INFO][4231] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:49.299 [INFO][4231] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:49.336 [INFO][4231] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:49.360 [INFO][4231] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:49.361 [INFO][4231] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:49.382 [INFO][4231] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:49.409 [INFO][4231] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:49.432 [INFO][4231] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.2/26] block=192.168.25.0/26 handle="k8s-pod-network.a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:49.432 [INFO][4231] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.2/26] handle="k8s-pod-network.a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:49.433 [INFO][4231] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 03:09:49.475388 containerd[1627]: 2026-01-24 03:09:49.434 [INFO][4231] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.2/26] IPv6=[] ContainerID="a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" HandleID="k8s-pod-network.a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:49.479960 containerd[1627]: 2026-01-24 03:09:49.439 [INFO][4200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-zpcp9" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0", GenerateName:"calico-apiserver-569dd98ffb-", Namespace:"calico-apiserver", SelfLink:"", UID:"76ab4499-021b-4baa-941b-8b5ea5143e46", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"569dd98ffb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-569dd98ffb-zpcp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56422ef3f21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:49.479960 containerd[1627]: 2026-01-24 03:09:49.440 [INFO][4200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.2/32] ContainerID="a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-zpcp9" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:49.479960 containerd[1627]: 2026-01-24 03:09:49.440 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56422ef3f21 ContainerID="a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-zpcp9" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:49.479960 containerd[1627]: 2026-01-24 03:09:49.447 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-zpcp9" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:49.479960 containerd[1627]: 2026-01-24 03:09:49.447 
[INFO][4200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-zpcp9" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0", GenerateName:"calico-apiserver-569dd98ffb-", Namespace:"calico-apiserver", SelfLink:"", UID:"76ab4499-021b-4baa-941b-8b5ea5143e46", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"569dd98ffb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff", Pod:"calico-apiserver-569dd98ffb-zpcp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56422ef3f21", MAC:"ba:07:88:0b:5f:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:49.479960 containerd[1627]: 2026-01-24 03:09:49.471 [INFO][4200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-zpcp9" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:49.562436 containerd[1627]: time="2026-01-24T03:09:49.561734628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:09:49.562436 containerd[1627]: time="2026-01-24T03:09:49.561904844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:09:49.562436 containerd[1627]: time="2026-01-24T03:09:49.561980786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:49.562436 containerd[1627]: time="2026-01-24T03:09:49.562244055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:49.769009 containerd[1627]: time="2026-01-24T03:09:49.768824780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rk5p,Uid:c3d4cc92-f20f-4793-8073-7a8fb294fc7f,Namespace:calico-system,Attempt:1,} returns sandbox id \"15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07\"" Jan 24 03:09:49.786156 containerd[1627]: time="2026-01-24T03:09:49.785837624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 03:09:49.892071 containerd[1627]: time="2026-01-24T03:09:49.891992046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569dd98ffb-zpcp9,Uid:76ab4499-021b-4baa-941b-8b5ea5143e46,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff\"" Jan 24 03:09:49.942292 systemd-networkd[1260]: cali4b7b6ec07d9: Link UP Jan 24 03:09:49.945963 systemd-networkd[1260]: cali4b7b6ec07d9: Gained carrier Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.442 [INFO][4311] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.484 [INFO][4311] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jddbi.gb1.brightbox.com-k8s-whisker--775d9ff4d9--p47mr-eth0 whisker-775d9ff4d9- calico-system d78211da-ca25-4f3e-be35-f78b1336c756 959 0 2026-01-24 03:09:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:775d9ff4d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-jddbi.gb1.brightbox.com whisker-775d9ff4d9-p47mr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4b7b6ec07d9 [] [] }} ContainerID="b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" Namespace="calico-system" Pod="whisker-775d9ff4d9-p47mr" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-whisker--775d9ff4d9--p47mr-" Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.486 [INFO][4311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" Namespace="calico-system" Pod="whisker-775d9ff4d9-p47mr" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-whisker--775d9ff4d9--p47mr-eth0" Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.781 [INFO][4388] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" HandleID="k8s-pod-network.b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" Workload="srv--jddbi.gb1.brightbox.com-k8s-whisker--775d9ff4d9--p47mr-eth0" Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.781 [INFO][4388] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" HandleID="k8s-pod-network.b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" Workload="srv--jddbi.gb1.brightbox.com-k8s-whisker--775d9ff4d9--p47mr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ee270), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-jddbi.gb1.brightbox.com", "pod":"whisker-775d9ff4d9-p47mr", "timestamp":"2026-01-24 03:09:49.781095886 +0000 UTC"}, Hostname:"srv-jddbi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.782 [INFO][4388] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.782 [INFO][4388] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.782 [INFO][4388] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jddbi.gb1.brightbox.com' Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.823 [INFO][4388] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.841 [INFO][4388] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.869 [INFO][4388] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.887 [INFO][4388] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.897 [INFO][4388] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.899 [INFO][4388] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.906 [INFO][4388] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.918 [INFO][4388] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.932 [INFO][4388] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.3/26] block=192.168.25.0/26 handle="k8s-pod-network.b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.932 [INFO][4388] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.3/26] handle="k8s-pod-network.b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.932 [INFO][4388] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 03:09:49.979687 containerd[1627]: 2026-01-24 03:09:49.932 [INFO][4388] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.3/26] IPv6=[] ContainerID="b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" HandleID="k8s-pod-network.b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" Workload="srv--jddbi.gb1.brightbox.com-k8s-whisker--775d9ff4d9--p47mr-eth0" Jan 24 03:09:49.983588 containerd[1627]: 2026-01-24 03:09:49.935 [INFO][4311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" Namespace="calico-system" Pod="whisker-775d9ff4d9-p47mr" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-whisker--775d9ff4d9--p47mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-whisker--775d9ff4d9--p47mr-eth0", GenerateName:"whisker-775d9ff4d9-", Namespace:"calico-system", SelfLink:"", UID:"d78211da-ca25-4f3e-be35-f78b1336c756", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775d9ff4d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"", Pod:"whisker-775d9ff4d9-p47mr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4b7b6ec07d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:49.983588 containerd[1627]: 2026-01-24 03:09:49.935 [INFO][4311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.3/32] ContainerID="b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" Namespace="calico-system" Pod="whisker-775d9ff4d9-p47mr" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-whisker--775d9ff4d9--p47mr-eth0" Jan 24 03:09:49.983588 containerd[1627]: 2026-01-24 03:09:49.936 [INFO][4311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b7b6ec07d9 ContainerID="b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" Namespace="calico-system" Pod="whisker-775d9ff4d9-p47mr" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-whisker--775d9ff4d9--p47mr-eth0" Jan 24 03:09:49.983588 containerd[1627]: 2026-01-24 03:09:49.945 [INFO][4311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" Namespace="calico-system" Pod="whisker-775d9ff4d9-p47mr" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-whisker--775d9ff4d9--p47mr-eth0" Jan 24 03:09:49.983588 containerd[1627]: 2026-01-24 03:09:49.948 [INFO][4311] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" Namespace="calico-system" 
Pod="whisker-775d9ff4d9-p47mr" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-whisker--775d9ff4d9--p47mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-whisker--775d9ff4d9--p47mr-eth0", GenerateName:"whisker-775d9ff4d9-", Namespace:"calico-system", SelfLink:"", UID:"d78211da-ca25-4f3e-be35-f78b1336c756", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775d9ff4d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a", Pod:"whisker-775d9ff4d9-p47mr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4b7b6ec07d9", MAC:"ce:89:e4:69:9f:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:49.983588 containerd[1627]: 2026-01-24 03:09:49.976 [INFO][4311] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a" Namespace="calico-system" Pod="whisker-775d9ff4d9-p47mr" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-whisker--775d9ff4d9--p47mr-eth0" Jan 24 03:09:50.027213 containerd[1627]: time="2026-01-24T03:09:50.024880472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:09:50.027213 containerd[1627]: time="2026-01-24T03:09:50.024972064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:09:50.027213 containerd[1627]: time="2026-01-24T03:09:50.025026851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:50.027213 containerd[1627]: time="2026-01-24T03:09:50.025242661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:50.142323 containerd[1627]: time="2026-01-24T03:09:50.141891689Z" level=info msg="StopPodSandbox for \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\"" Jan 24 03:09:50.142993 containerd[1627]: time="2026-01-24T03:09:50.142960235Z" level=info msg="StopPodSandbox for \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\"" Jan 24 03:09:50.171280 systemd-journald[1177]: Under memory pressure, flushing caches. Jan 24 03:09:50.146710 systemd-resolved[1515]: Under memory pressure, flushing caches. 
Jan 24 03:09:50.173976 containerd[1627]: time="2026-01-24T03:09:50.143714836Z" level=info msg="StopPodSandbox for \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\"" Jan 24 03:09:50.173976 containerd[1627]: time="2026-01-24T03:09:50.143746801Z" level=info msg="StopPodSandbox for \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\"" Jan 24 03:09:50.173976 containerd[1627]: time="2026-01-24T03:09:50.168025119Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:09:50.146737 systemd-resolved[1515]: Flushed all caches. Jan 24 03:09:50.180434 kubelet[2868]: I0124 03:09:50.180336 2868 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3cd30f3-05f1-4f59-9983-53b558455fdb" path="/var/lib/kubelet/pods/e3cd30f3-05f1-4f59-9983-53b558455fdb/volumes" Jan 24 03:09:50.210443 containerd[1627]: time="2026-01-24T03:09:50.174406326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 03:09:50.210443 containerd[1627]: time="2026-01-24T03:09:50.183301729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 03:09:50.220510 kubelet[2868]: E0124 03:09:50.220429 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 03:09:50.223643 kubelet[2868]: E0124 03:09:50.222622 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 03:09:50.224415 containerd[1627]: time="2026-01-24T03:09:50.224330370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 03:09:50.246575 kubelet[2868]: E0124 03:09:50.246416 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfw7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7rk5p_calico-system(c3d4cc92-f20f-4793-8073-7a8fb294fc7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 03:09:50.405339 systemd-networkd[1260]: cali220ced87d92: Gained IPv6LL Jan 24 03:09:50.566332 containerd[1627]: time="2026-01-24T03:09:50.566186744Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:09:50.573136 containerd[1627]: time="2026-01-24T03:09:50.573053821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 03:09:50.574821 kubelet[2868]: E0124 03:09:50.574337 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 03:09:50.576787 kubelet[2868]: E0124 03:09:50.575163 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 03:09:50.576932 containerd[1627]: 
time="2026-01-24T03:09:50.573387054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 03:09:50.579142 containerd[1627]: time="2026-01-24T03:09:50.578430230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 03:09:50.579453 kubelet[2868]: E0124 03:09:50.575575 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5kqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569dd98ffb-zpcp9_calico-apiserver(76ab4499-021b-4baa-941b-8b5ea5143e46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 03:09:50.582022 kubelet[2868]: E0124 03:09:50.581858 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" podUID="76ab4499-021b-4baa-941b-8b5ea5143e46" Jan 24 03:09:50.814634 kernel: bpftool[4625]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 03:09:50.907287 containerd[1627]: 
time="2026-01-24T03:09:50.907211241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775d9ff4d9-p47mr,Uid:d78211da-ca25-4f3e-be35-f78b1336c756,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3a80fa6f98dfaa63ab8d9256d7c254bfa07359e65aeac3023af3e7378a5197a\"" Jan 24 03:09:50.939877 containerd[1627]: 2026-01-24 03:09:50.460 [INFO][4540] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Jan 24 03:09:50.939877 containerd[1627]: 2026-01-24 03:09:50.473 [INFO][4540] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" iface="eth0" netns="/var/run/netns/cni-820abbfd-47d5-e666-3a71-7ee8b9991a40" Jan 24 03:09:50.939877 containerd[1627]: 2026-01-24 03:09:50.473 [INFO][4540] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" iface="eth0" netns="/var/run/netns/cni-820abbfd-47d5-e666-3a71-7ee8b9991a40" Jan 24 03:09:50.939877 containerd[1627]: 2026-01-24 03:09:50.478 [INFO][4540] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" iface="eth0" netns="/var/run/netns/cni-820abbfd-47d5-e666-3a71-7ee8b9991a40" Jan 24 03:09:50.939877 containerd[1627]: 2026-01-24 03:09:50.478 [INFO][4540] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Jan 24 03:09:50.939877 containerd[1627]: 2026-01-24 03:09:50.478 [INFO][4540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Jan 24 03:09:50.939877 containerd[1627]: 2026-01-24 03:09:50.835 [INFO][4568] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" HandleID="k8s-pod-network.721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:50.939877 containerd[1627]: 2026-01-24 03:09:50.835 [INFO][4568] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:50.939877 containerd[1627]: 2026-01-24 03:09:50.835 [INFO][4568] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:50.939877 containerd[1627]: 2026-01-24 03:09:50.894 [WARNING][4568] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" HandleID="k8s-pod-network.721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:50.939877 containerd[1627]: 2026-01-24 03:09:50.894 [INFO][4568] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" HandleID="k8s-pod-network.721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:50.939877 containerd[1627]: 2026-01-24 03:09:50.901 [INFO][4568] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 03:09:50.939877 containerd[1627]: 2026-01-24 03:09:50.929 [INFO][4540] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Jan 24 03:09:50.968663 containerd[1627]: time="2026-01-24T03:09:50.965720434Z" level=info msg="TearDown network for sandbox \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\" successfully" Jan 24 03:09:50.968663 containerd[1627]: time="2026-01-24T03:09:50.965770336Z" level=info msg="StopPodSandbox for \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\" returns successfully" Jan 24 03:09:50.968663 containerd[1627]: 2026-01-24 03:09:50.467 [INFO][4541] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Jan 24 03:09:50.968663 containerd[1627]: 2026-01-24 03:09:50.480 [INFO][4541] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" iface="eth0" netns="/var/run/netns/cni-add80f09-4a53-a98e-1470-32c152a06d3c" Jan 24 03:09:50.968663 containerd[1627]: 2026-01-24 03:09:50.481 [INFO][4541] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" iface="eth0" netns="/var/run/netns/cni-add80f09-4a53-a98e-1470-32c152a06d3c" Jan 24 03:09:50.968663 containerd[1627]: 2026-01-24 03:09:50.482 [INFO][4541] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" iface="eth0" netns="/var/run/netns/cni-add80f09-4a53-a98e-1470-32c152a06d3c" Jan 24 03:09:50.968663 containerd[1627]: 2026-01-24 03:09:50.482 [INFO][4541] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Jan 24 03:09:50.968663 containerd[1627]: 2026-01-24 03:09:50.482 [INFO][4541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Jan 24 03:09:50.968663 containerd[1627]: 2026-01-24 03:09:50.834 [INFO][4570] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" HandleID="k8s-pod-network.d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Workload="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:50.968663 containerd[1627]: 2026-01-24 03:09:50.845 [INFO][4570] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:50.968663 containerd[1627]: 2026-01-24 03:09:50.903 [INFO][4570] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:50.968663 containerd[1627]: 2026-01-24 03:09:50.918 [WARNING][4570] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" HandleID="k8s-pod-network.d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Workload="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:50.968663 containerd[1627]: 2026-01-24 03:09:50.918 [INFO][4570] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" HandleID="k8s-pod-network.d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Workload="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:50.968663 containerd[1627]: 2026-01-24 03:09:50.922 [INFO][4570] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:50.968663 containerd[1627]: 2026-01-24 03:09:50.943 [INFO][4541] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Jan 24 03:09:50.975925 systemd[1]: run-netns-cni\x2d820abbfd\x2d47d5\x2de666\x2d3a71\x2d7ee8b9991a40.mount: Deactivated successfully. Jan 24 03:09:50.987624 systemd[1]: run-netns-cni\x2dadd80f09\x2d4a53\x2da98e\x2d1470\x2d32c152a06d3c.mount: Deactivated successfully. Jan 24 03:09:51.014632 containerd[1627]: 2026-01-24 03:09:50.763 [INFO][4538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Jan 24 03:09:51.014632 containerd[1627]: 2026-01-24 03:09:50.763 [INFO][4538] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" iface="eth0" netns="/var/run/netns/cni-5337ef00-729a-755f-b907-0558f224fb19" Jan 24 03:09:51.014632 containerd[1627]: 2026-01-24 03:09:50.772 [INFO][4538] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" iface="eth0" netns="/var/run/netns/cni-5337ef00-729a-755f-b907-0558f224fb19" Jan 24 03:09:51.014632 containerd[1627]: 2026-01-24 03:09:50.773 [INFO][4538] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" iface="eth0" netns="/var/run/netns/cni-5337ef00-729a-755f-b907-0558f224fb19" Jan 24 03:09:51.014632 containerd[1627]: 2026-01-24 03:09:50.773 [INFO][4538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Jan 24 03:09:51.014632 containerd[1627]: 2026-01-24 03:09:50.773 [INFO][4538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Jan 24 03:09:51.014632 containerd[1627]: 2026-01-24 03:09:50.899 [INFO][4612] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" HandleID="k8s-pod-network.4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:51.014632 containerd[1627]: 2026-01-24 03:09:50.899 [INFO][4612] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:51.014632 containerd[1627]: 2026-01-24 03:09:50.922 [INFO][4612] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 03:09:51.014632 containerd[1627]: 2026-01-24 03:09:50.955 [WARNING][4612] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" HandleID="k8s-pod-network.4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:51.014632 containerd[1627]: 2026-01-24 03:09:50.955 [INFO][4612] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" HandleID="k8s-pod-network.4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:51.014632 containerd[1627]: 2026-01-24 03:09:50.959 [INFO][4612] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:51.014632 containerd[1627]: 2026-01-24 03:09:50.962 [INFO][4538] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Jan 24 03:09:51.016865 containerd[1627]: time="2026-01-24T03:09:51.015707811Z" level=info msg="TearDown network for sandbox \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\" successfully" Jan 24 03:09:51.016865 containerd[1627]: time="2026-01-24T03:09:51.016159804Z" level=info msg="StopPodSandbox for \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\" returns successfully" Jan 24 03:09:51.019516 systemd[1]: run-netns-cni\x2d5337ef00\x2d729a\x2d755f\x2db907\x2d0558f224fb19.mount: Deactivated successfully. Jan 24 03:09:51.023010 containerd[1627]: time="2026-01-24T03:09:51.019982201Z" level=info msg="TearDown network for sandbox \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\" successfully" Jan 24 03:09:51.023010 containerd[1627]: time="2026-01-24T03:09:51.020019645Z" level=info msg="StopPodSandbox for \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\" returns successfully" Jan 24 03:09:51.023010 containerd[1627]: time="2026-01-24T03:09:51.020286640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cd87fbdf-87r2w,Uid:7b9a31a8-5cc7-4ee4-9145-620e764b84d5,Namespace:calico-system,Attempt:1,}" Jan 24 03:09:51.028623 kubelet[2868]: E0124 03:09:51.026515 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" podUID="76ab4499-021b-4baa-941b-8b5ea5143e46" Jan 24 03:09:51.029837 containerd[1627]: time="2026-01-24T03:09:51.029781066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-54qqp,Uid:b46b6c51-14b1-4c45-8faa-d27677477dc3,Namespace:calico-system,Attempt:1,}" Jan 24 03:09:51.038698 containerd[1627]: time="2026-01-24T03:09:51.032190391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dpcwb,Uid:94ce2e5d-4660-46c3-961b-bbe64cee7f9e,Namespace:kube-system,Attempt:1,}" Jan 24 03:09:51.057309 containerd[1627]: 2026-01-24 03:09:50.520 [INFO][4551] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Jan 24 03:09:51.057309 containerd[1627]: 2026-01-24 03:09:50.526 [INFO][4551] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" iface="eth0" netns="/var/run/netns/cni-d6436852-0e16-a7cb-db18-26da8190b09b" Jan 24 03:09:51.057309 containerd[1627]: 2026-01-24 03:09:50.529 [INFO][4551] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" iface="eth0" netns="/var/run/netns/cni-d6436852-0e16-a7cb-db18-26da8190b09b" Jan 24 03:09:51.057309 containerd[1627]: 2026-01-24 03:09:50.542 [INFO][4551] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" iface="eth0" netns="/var/run/netns/cni-d6436852-0e16-a7cb-db18-26da8190b09b" Jan 24 03:09:51.057309 containerd[1627]: 2026-01-24 03:09:50.542 [INFO][4551] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Jan 24 03:09:51.057309 containerd[1627]: 2026-01-24 03:09:50.542 [INFO][4551] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Jan 24 03:09:51.057309 containerd[1627]: 2026-01-24 03:09:50.990 [INFO][4583] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" HandleID="k8s-pod-network.da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:51.057309 containerd[1627]: 2026-01-24 03:09:50.990 [INFO][4583] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:51.057309 containerd[1627]: 2026-01-24 03:09:50.990 [INFO][4583] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:51.057309 containerd[1627]: 2026-01-24 03:09:51.016 [WARNING][4583] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" HandleID="k8s-pod-network.da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:51.057309 containerd[1627]: 2026-01-24 03:09:51.016 [INFO][4583] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" HandleID="k8s-pod-network.da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:51.057309 containerd[1627]: 2026-01-24 03:09:51.025 [INFO][4583] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:51.057309 containerd[1627]: 2026-01-24 03:09:51.050 [INFO][4551] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Jan 24 03:09:51.061580 containerd[1627]: time="2026-01-24T03:09:51.061505472Z" level=info msg="TearDown network for sandbox \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\" successfully" Jan 24 03:09:51.061934 containerd[1627]: time="2026-01-24T03:09:51.061901584Z" level=info msg="StopPodSandbox for \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\" returns successfully" Jan 24 03:09:51.065820 containerd[1627]: time="2026-01-24T03:09:51.065778749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569dd98ffb-4br8n,Uid:00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c,Namespace:calico-apiserver,Attempt:1,}" Jan 24 03:09:51.067016 systemd[1]: run-netns-cni\x2dd6436852\x2d0e16\x2da7cb\x2ddb18\x2d26da8190b09b.mount: Deactivated successfully. Jan 24 03:09:51.107375 systemd-networkd[1260]: cali56422ef3f21: Gained IPv6LL Jan 24 03:09:51.176753 systemd-networkd[1260]: cali4b7b6ec07d9: Gained IPv6LL Jan 24 03:09:51.303168 containerd[1627]: time="2026-01-24T03:09:51.303112186Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:09:51.306772 containerd[1627]: time="2026-01-24T03:09:51.306334471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 03:09:51.307328 containerd[1627]: time="2026-01-24T03:09:51.306746932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 03:09:51.309667 kubelet[2868]: E0124 03:09:51.309303 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 03:09:51.309667 kubelet[2868]: E0124 03:09:51.309388 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 03:09:51.313809 containerd[1627]: time="2026-01-24T03:09:51.313074446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 03:09:51.317492 kubelet[2868]: E0124 03:09:51.317405 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfw7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7rk5p_calico-system(c3d4cc92-f20f-4793-8073-7a8fb294fc7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 03:09:51.321848 kubelet[2868]: E0124 03:09:51.320730 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:09:51.519375 systemd-networkd[1260]: vxlan.calico: Link UP Jan 24 03:09:51.519385 systemd-networkd[1260]: vxlan.calico: Gained carrier Jan 24 03:09:51.718011 systemd-networkd[1260]: cali1e4aa935ab7: Link UP Jan 24 03:09:51.722243 systemd-networkd[1260]: cali1e4aa935ab7: Gained carrier Jan 24 03:09:51.804028 containerd[1627]: time="2026-01-24T03:09:51.800935970Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:09:51.808641 containerd[1627]: time="2026-01-24T03:09:51.807413261Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 03:09:51.808641 containerd[1627]: time="2026-01-24T03:09:51.807538964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 03:09:51.808795 kubelet[2868]: E0124 03:09:51.808183 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 03:09:51.808795 kubelet[2868]: E0124 03:09:51.808312 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 03:09:51.810421 kubelet[2868]: E0124 03:09:51.809300 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:59dbfedf34134529b60f39f05b808eb2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fndp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775d9ff4d9-p47mr_calico-system(d78211da-ca25-4f3e-be35-f78b1336c756): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 03:09:51.818919 containerd[1627]: time="2026-01-24T03:09:51.817226277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.369 [INFO][4639] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0 coredns-668d6bf9bc- kube-system 94ce2e5d-4660-46c3-961b-bbe64cee7f9e 985 0 2026-01-24 03:09:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-jddbi.gb1.brightbox.com coredns-668d6bf9bc-dpcwb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1e4aa935ab7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" Namespace="kube-system" Pod="coredns-668d6bf9bc-dpcwb" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-" Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.371 [INFO][4639] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" Namespace="kube-system" Pod="coredns-668d6bf9bc-dpcwb" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.513 [INFO][4703] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" HandleID="k8s-pod-network.2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.513 [INFO][4703] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" HandleID="k8s-pod-network.2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fb6d0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-jddbi.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-dpcwb", "timestamp":"2026-01-24 03:09:51.513212039 +0000 UTC"}, Hostname:"srv-jddbi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.513 [INFO][4703] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.513 [INFO][4703] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.513 [INFO][4703] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jddbi.gb1.brightbox.com' Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.563 [INFO][4703] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.583 [INFO][4703] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.610 [INFO][4703] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.618 [INFO][4703] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.621 [INFO][4703] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.621 [INFO][4703] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.624 [INFO][4703] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.633 [INFO][4703] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.647 [INFO][4703] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.4/26] block=192.168.25.0/26 handle="k8s-pod-network.2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.647 [INFO][4703] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.4/26] handle="k8s-pod-network.2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.647 [INFO][4703] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 03:09:51.850884 containerd[1627]: 2026-01-24 03:09:51.647 [INFO][4703] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.4/26] IPv6=[] ContainerID="2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" HandleID="k8s-pod-network.2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:51.855905 containerd[1627]: 2026-01-24 03:09:51.678 [INFO][4639] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" Namespace="kube-system" Pod="coredns-668d6bf9bc-dpcwb" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"94ce2e5d-4660-46c3-961b-bbe64cee7f9e", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-dpcwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e4aa935ab7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:51.855905 containerd[1627]: 2026-01-24 03:09:51.678 [INFO][4639] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.4/32] ContainerID="2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" Namespace="kube-system" Pod="coredns-668d6bf9bc-dpcwb" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:51.855905 containerd[1627]: 2026-01-24 03:09:51.678 [INFO][4639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e4aa935ab7 ContainerID="2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" Namespace="kube-system" Pod="coredns-668d6bf9bc-dpcwb" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:51.855905 containerd[1627]: 2026-01-24 03:09:51.721 [INFO][4639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-dpcwb" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:51.855905 containerd[1627]: 2026-01-24 03:09:51.736 [INFO][4639] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" Namespace="kube-system" Pod="coredns-668d6bf9bc-dpcwb" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"94ce2e5d-4660-46c3-961b-bbe64cee7f9e", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def", Pod:"coredns-668d6bf9bc-dpcwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e4aa935ab7", MAC:"7e:30:2b:80:51:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:51.855905 containerd[1627]: 2026-01-24 03:09:51.813 [INFO][4639] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def" Namespace="kube-system" Pod="coredns-668d6bf9bc-dpcwb" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:51.992673 systemd-networkd[1260]: calib073ff2bd76: Link UP Jan 24 03:09:52.017521 systemd-networkd[1260]: calib073ff2bd76: Gained carrier Jan 24 03:09:52.031317 containerd[1627]: time="2026-01-24T03:09:52.031180366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:09:52.031836 containerd[1627]: time="2026-01-24T03:09:52.031285154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:09:52.031836 containerd[1627]: time="2026-01-24T03:09:52.031307423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:52.031836 containerd[1627]: time="2026-01-24T03:09:52.031465181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:52.065913 kubelet[2868]: E0124 03:09:52.065477 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" podUID="76ab4499-021b-4baa-941b-8b5ea5143e46" Jan 24 03:09:52.081525 kubelet[2868]: E0124 03:09:52.077348 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:09:52.138656 containerd[1627]: time="2026-01-24T03:09:52.138517086Z" level=info msg="StopPodSandbox for \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\"" Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.563 [INFO][4646] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0 calico-kube-controllers-64cd87fbdf- calico-system 7b9a31a8-5cc7-4ee4-9145-620e764b84d5 980 0 2026-01-24 03:09:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64cd87fbdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-jddbi.gb1.brightbox.com calico-kube-controllers-64cd87fbdf-87r2w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib073ff2bd76 [] [] }} ContainerID="3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" Namespace="calico-system" Pod="calico-kube-controllers-64cd87fbdf-87r2w" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-" Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.564 [INFO][4646] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" Namespace="calico-system" Pod="calico-kube-controllers-64cd87fbdf-87r2w" 
WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.762 [INFO][4729] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" HandleID="k8s-pod-network.3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.764 [INFO][4729] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" HandleID="k8s-pod-network.3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-jddbi.gb1.brightbox.com", "pod":"calico-kube-controllers-64cd87fbdf-87r2w", "timestamp":"2026-01-24 03:09:51.762536458 +0000 UTC"}, Hostname:"srv-jddbi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.768 [INFO][4729] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.768 [INFO][4729] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.768 [INFO][4729] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jddbi.gb1.brightbox.com' Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.840 [INFO][4729] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.858 [INFO][4729] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.900 [INFO][4729] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.914 [INFO][4729] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.919 [INFO][4729] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.920 [INFO][4729] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.924 [INFO][4729] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316 Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.936 [INFO][4729] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 
handle="k8s-pod-network.3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.957 [INFO][4729] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.5/26] block=192.168.25.0/26 handle="k8s-pod-network.3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.960 [INFO][4729] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.5/26] handle="k8s-pod-network.3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.960 [INFO][4729] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:52.182778 containerd[1627]: 2026-01-24 03:09:51.960 [INFO][4729] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.5/26] IPv6=[] ContainerID="3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" HandleID="k8s-pod-network.3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:52.184576 containerd[1627]: 2026-01-24 03:09:51.975 [INFO][4646] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" Namespace="calico-system" Pod="calico-kube-controllers-64cd87fbdf-87r2w" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0", GenerateName:"calico-kube-controllers-64cd87fbdf-", Namespace:"calico-system", SelfLink:"", UID:"7b9a31a8-5cc7-4ee4-9145-620e764b84d5", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64cd87fbdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-64cd87fbdf-87r2w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib073ff2bd76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:52.184576 containerd[1627]: 2026-01-24 03:09:51.976 [INFO][4646] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.5/32] ContainerID="3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" Namespace="calico-system" Pod="calico-kube-controllers-64cd87fbdf-87r2w" 
WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:52.184576 containerd[1627]: 2026-01-24 03:09:51.976 [INFO][4646] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib073ff2bd76 ContainerID="3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" Namespace="calico-system" Pod="calico-kube-controllers-64cd87fbdf-87r2w" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:52.184576 containerd[1627]: 2026-01-24 03:09:52.031 [INFO][4646] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" Namespace="calico-system" Pod="calico-kube-controllers-64cd87fbdf-87r2w" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:52.184576 containerd[1627]: 2026-01-24 03:09:52.056 [INFO][4646] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" Namespace="calico-system" Pod="calico-kube-controllers-64cd87fbdf-87r2w" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0", GenerateName:"calico-kube-controllers-64cd87fbdf-", Namespace:"calico-system", SelfLink:"", UID:"7b9a31a8-5cc7-4ee4-9145-620e764b84d5", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64cd87fbdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316", Pod:"calico-kube-controllers-64cd87fbdf-87r2w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib073ff2bd76", MAC:"76:36:7f:15:aa:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:52.184576 containerd[1627]: 2026-01-24 03:09:52.119 [INFO][4646] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316" Namespace="calico-system" Pod="calico-kube-controllers-64cd87fbdf-87r2w" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:52.202535 containerd[1627]: time="2026-01-24T03:09:52.198147611Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:09:52.235354 
containerd[1627]: time="2026-01-24T03:09:52.231410573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 03:09:52.235354 containerd[1627]: time="2026-01-24T03:09:52.231540213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 03:09:52.235690 kubelet[2868]: E0124 03:09:52.232944 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 03:09:52.235690 kubelet[2868]: E0124 03:09:52.233008 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 03:09:52.235690 kubelet[2868]: E0124 03:09:52.233192 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fndp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775d9ff4d9-p47mr_calico-system(d78211da-ca25-4f3e-be35-f78b1336c756): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 03:09:52.239763 kubelet[2868]: E0124 03:09:52.236346 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775d9ff4d9-p47mr" podUID="d78211da-ca25-4f3e-be35-f78b1336c756" Jan 24 03:09:52.319392 systemd-networkd[1260]: cali2a620102199: Link UP Jan 24 03:09:52.328338 systemd-networkd[1260]: cali2a620102199: Gained carrier Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:51.551 [INFO][4659] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0 goldmane-666569f655- calico-system b46b6c51-14b1-4c45-8faa-d27677477dc3 981 0 2026-01-24 03:09:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-jddbi.gb1.brightbox.com goldmane-666569f655-54qqp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2a620102199 [] [] }} ContainerID="b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" Namespace="calico-system" Pod="goldmane-666569f655-54qqp" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-" Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:51.551 [INFO][4659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" Namespace="calico-system" Pod="goldmane-666569f655-54qqp" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:51.847 [INFO][4731] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" HandleID="k8s-pod-network.b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" Workload="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:51.848 [INFO][4731] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" HandleID="k8s-pod-network.b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" Workload="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319de0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-jddbi.gb1.brightbox.com", "pod":"goldmane-666569f655-54qqp", "timestamp":"2026-01-24 
03:09:51.847949315 +0000 UTC"}, Hostname:"srv-jddbi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:51.848 [INFO][4731] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:51.963 [INFO][4731] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:51.963 [INFO][4731] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jddbi.gb1.brightbox.com' Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:52.036 [INFO][4731] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:52.120 [INFO][4731] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:52.171 [INFO][4731] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:52.186 [INFO][4731] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:52.196 [INFO][4731] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:52.197 [INFO][4731] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:52.215 [INFO][4731] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543 Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:52.241 [INFO][4731] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:52.277 [INFO][4731] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.6/26] block=192.168.25.0/26 handle="k8s-pod-network.b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:52.277 [INFO][4731] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.6/26] handle="k8s-pod-network.b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:52.277 [INFO][4731] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 03:09:52.383629 containerd[1627]: 2026-01-24 03:09:52.278 [INFO][4731] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.6/26] IPv6=[] ContainerID="b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" HandleID="k8s-pod-network.b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" Workload="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:52.384751 containerd[1627]: 2026-01-24 03:09:52.284 [INFO][4659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" Namespace="calico-system" Pod="goldmane-666569f655-54qqp" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b46b6c51-14b1-4c45-8faa-d27677477dc3", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-54qqp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a620102199", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:52.384751 containerd[1627]: 2026-01-24 03:09:52.288 [INFO][4659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.6/32] ContainerID="b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" Namespace="calico-system" Pod="goldmane-666569f655-54qqp" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:52.384751 containerd[1627]: 2026-01-24 03:09:52.288 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a620102199 ContainerID="b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" Namespace="calico-system" Pod="goldmane-666569f655-54qqp" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:52.384751 containerd[1627]: 2026-01-24 03:09:52.340 [INFO][4659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" Namespace="calico-system" Pod="goldmane-666569f655-54qqp" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:52.384751 containerd[1627]: 2026-01-24 03:09:52.345 [INFO][4659] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" 
Namespace="calico-system" Pod="goldmane-666569f655-54qqp" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b46b6c51-14b1-4c45-8faa-d27677477dc3", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543", Pod:"goldmane-666569f655-54qqp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a620102199", MAC:"56:12:0f:09:8a:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:52.384751 containerd[1627]: 2026-01-24 03:09:52.375 [INFO][4659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543" Namespace="calico-system" Pod="goldmane-666569f655-54qqp" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:52.505931 containerd[1627]: time="2026-01-24T03:09:52.502655925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:09:52.505931 containerd[1627]: time="2026-01-24T03:09:52.502736650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:09:52.505931 containerd[1627]: time="2026-01-24T03:09:52.502759550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:52.505931 containerd[1627]: time="2026-01-24T03:09:52.502917867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:52.515014 containerd[1627]: time="2026-01-24T03:09:52.514443635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dpcwb,Uid:94ce2e5d-4660-46c3-961b-bbe64cee7f9e,Namespace:kube-system,Attempt:1,} returns sandbox id \"2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def\"" Jan 24 03:09:52.546401 containerd[1627]: time="2026-01-24T03:09:52.546349817Z" level=info msg="CreateContainer within sandbox \"2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 03:09:52.562256 systemd-networkd[1260]: calib4cdbd52d77: Link UP Jan 24 03:09:52.569120 systemd-networkd[1260]: calib4cdbd52d77: Gained carrier Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:51.390 [INFO][4654] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0 calico-apiserver-569dd98ffb- calico-apiserver 00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c 982 0 2026-01-24 03:09:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:569dd98ffb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-jddbi.gb1.brightbox.com calico-apiserver-569dd98ffb-4br8n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib4cdbd52d77 [] [] }} ContainerID="e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-4br8n" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-" Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:51.393 [INFO][4654] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-4br8n" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.177 [INFO][4721] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" HandleID="k8s-pod-network.e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.177 [INFO][4721] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" HandleID="k8s-pod-network.e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122940), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-jddbi.gb1.brightbox.com", "pod":"calico-apiserver-569dd98ffb-4br8n", "timestamp":"2026-01-24 03:09:52.177310723 +0000 UTC"}, Hostname:"srv-jddbi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 03:09:52.638845 
containerd[1627]: 2026-01-24 03:09:52.178 [INFO][4721] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.281 [INFO][4721] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.283 [INFO][4721] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jddbi.gb1.brightbox.com' Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.309 [INFO][4721] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.355 [INFO][4721] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.390 [INFO][4721] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.428 [INFO][4721] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.446 [INFO][4721] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.446 [INFO][4721] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.456 [INFO][4721] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7 Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.485 [INFO][4721] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.510 [INFO][4721] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.7/26] block=192.168.25.0/26 handle="k8s-pod-network.e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.512 [INFO][4721] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.7/26] handle="k8s-pod-network.e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.513 [INFO][4721] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 03:09:52.638845 containerd[1627]: 2026-01-24 03:09:52.514 [INFO][4721] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.7/26] IPv6=[] ContainerID="e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" HandleID="k8s-pod-network.e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:52.641924 containerd[1627]: 2026-01-24 03:09:52.547 [INFO][4654] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-4br8n" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0", GenerateName:"calico-apiserver-569dd98ffb-", Namespace:"calico-apiserver", SelfLink:"", UID:"00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"569dd98ffb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-569dd98ffb-4br8n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4cdbd52d77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:52.641924 containerd[1627]: 2026-01-24 03:09:52.547 [INFO][4654] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.7/32] ContainerID="e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-4br8n" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:52.641924 containerd[1627]: 2026-01-24 03:09:52.547 [INFO][4654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4cdbd52d77 ContainerID="e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-4br8n" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:52.641924 containerd[1627]: 2026-01-24 03:09:52.571 [INFO][4654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-4br8n" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:52.641924 containerd[1627]: 2026-01-24 03:09:52.572 
[INFO][4654] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-4br8n" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0", GenerateName:"calico-apiserver-569dd98ffb-", Namespace:"calico-apiserver", SelfLink:"", UID:"00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"569dd98ffb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7", Pod:"calico-apiserver-569dd98ffb-4br8n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4cdbd52d77", MAC:"e2:5f:7c:69:2e:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:52.641924 containerd[1627]: 2026-01-24 03:09:52.613 [INFO][4654] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7" Namespace="calico-apiserver" Pod="calico-apiserver-569dd98ffb-4br8n" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:52.675094 containerd[1627]: time="2026-01-24T03:09:52.673334716Z" level=info msg="CreateContainer within sandbox \"2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8d599e554b988171717c86a1482268ccc9a86844d4ed119a7b75508eb59c7d6b\"" Jan 24 03:09:52.681763 containerd[1627]: time="2026-01-24T03:09:52.681721090Z" level=info msg="StartContainer for \"8d599e554b988171717c86a1482268ccc9a86844d4ed119a7b75508eb59c7d6b\"" Jan 24 03:09:52.735691 containerd[1627]: time="2026-01-24T03:09:52.726303027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:09:52.735691 containerd[1627]: time="2026-01-24T03:09:52.726415853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:09:52.735691 containerd[1627]: time="2026-01-24T03:09:52.726456636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:52.757319 containerd[1627]: time="2026-01-24T03:09:52.736920782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:52.835030 systemd-networkd[1260]: vxlan.calico: Gained IPv6LL Jan 24 03:09:52.871672 containerd[1627]: time="2026-01-24T03:09:52.869065458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:09:52.871672 containerd[1627]: time="2026-01-24T03:09:52.869175240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:09:52.871672 containerd[1627]: time="2026-01-24T03:09:52.869201136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:52.871672 containerd[1627]: time="2026-01-24T03:09:52.869354987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:52.986644 systemd[1]: run-containerd-runc-k8s.io-3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316-runc.i4NDd8.mount: Deactivated successfully. Jan 24 03:09:53.154268 kubelet[2868]: E0124 03:09:53.154200 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775d9ff4d9-p47mr" podUID="d78211da-ca25-4f3e-be35-f78b1336c756" Jan 24 03:09:53.167710 containerd[1627]: time="2026-01-24T03:09:53.167041749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64cd87fbdf-87r2w,Uid:7b9a31a8-5cc7-4ee4-9145-620e764b84d5,Namespace:calico-system,Attempt:1,} returns sandbox id \"3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316\"" Jan 24 03:09:53.190844 containerd[1627]: time="2026-01-24T03:09:53.190171751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 03:09:53.253446 containerd[1627]: time="2026-01-24T03:09:53.250983214Z" level=info msg="StartContainer for \"8d599e554b988171717c86a1482268ccc9a86844d4ed119a7b75508eb59c7d6b\" returns successfully" Jan 24 03:09:53.279505 containerd[1627]: 2026-01-24 03:09:52.770 [INFO][4807] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Jan 24 03:09:53.279505 containerd[1627]: 2026-01-24 03:09:52.770 [INFO][4807] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" iface="eth0" netns="/var/run/netns/cni-41d6bcfc-944b-cb06-2918-0bdbf740a16e" Jan 24 03:09:53.279505 containerd[1627]: 2026-01-24 03:09:52.774 [INFO][4807] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" iface="eth0" netns="/var/run/netns/cni-41d6bcfc-944b-cb06-2918-0bdbf740a16e" Jan 24 03:09:53.279505 containerd[1627]: 2026-01-24 03:09:52.778 [INFO][4807] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" iface="eth0" netns="/var/run/netns/cni-41d6bcfc-944b-cb06-2918-0bdbf740a16e" Jan 24 03:09:53.279505 containerd[1627]: 2026-01-24 03:09:52.778 [INFO][4807] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Jan 24 03:09:53.279505 containerd[1627]: 2026-01-24 03:09:52.779 [INFO][4807] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Jan 24 03:09:53.279505 containerd[1627]: 2026-01-24 03:09:53.208 [INFO][4942] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" HandleID="k8s-pod-network.e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:53.279505 containerd[1627]: 2026-01-24 03:09:53.209 [INFO][4942] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:53.279505 containerd[1627]: 2026-01-24 03:09:53.209 [INFO][4942] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:53.279505 containerd[1627]: 2026-01-24 03:09:53.238 [WARNING][4942] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" HandleID="k8s-pod-network.e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:53.279505 containerd[1627]: 2026-01-24 03:09:53.239 [INFO][4942] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" HandleID="k8s-pod-network.e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:53.279505 containerd[1627]: 2026-01-24 03:09:53.248 [INFO][4942] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:53.279505 containerd[1627]: 2026-01-24 03:09:53.265 [INFO][4807] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Jan 24 03:09:53.287572 systemd[1]: run-netns-cni\x2d41d6bcfc\x2d944b\x2dcb06\x2d2918\x2d0bdbf740a16e.mount: Deactivated successfully. 
Jan 24 03:09:53.292973 containerd[1627]: time="2026-01-24T03:09:53.288070213Z" level=info msg="TearDown network for sandbox \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\" successfully" Jan 24 03:09:53.296262 containerd[1627]: time="2026-01-24T03:09:53.292231290Z" level=info msg="StopPodSandbox for \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\" returns successfully" Jan 24 03:09:53.303292 containerd[1627]: time="2026-01-24T03:09:53.302880236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jvx2v,Uid:b869920c-5e36-401e-9670-1efb848b70fd,Namespace:kube-system,Attempt:1,}" Jan 24 03:09:53.332672 containerd[1627]: time="2026-01-24T03:09:53.331754890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-54qqp,Uid:b46b6c51-14b1-4c45-8faa-d27677477dc3,Namespace:calico-system,Attempt:1,} returns sandbox id \"b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543\"" Jan 24 03:09:53.520632 containerd[1627]: time="2026-01-24T03:09:53.520366213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-569dd98ffb-4br8n,Uid:00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7\"" Jan 24 03:09:53.567674 containerd[1627]: time="2026-01-24T03:09:53.567375047Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:09:53.578453 containerd[1627]: time="2026-01-24T03:09:53.577936847Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 03:09:53.578453 containerd[1627]: time="2026-01-24T03:09:53.578071439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 03:09:53.580701 kubelet[2868]: E0124 03:09:53.580400 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 03:09:53.580701 kubelet[2868]: E0124 03:09:53.580630 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 03:09:53.582408 kubelet[2868]: E0124 03:09:53.581363 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5cht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64cd87fbdf-87r2w_calico-system(7b9a31a8-5cc7-4ee4-9145-620e764b84d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 03:09:53.584716 kubelet[2868]: E0124 03:09:53.583643 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" podUID="7b9a31a8-5cc7-4ee4-9145-620e764b84d5" Jan 24 03:09:53.588059 containerd[1627]: time="2026-01-24T03:09:53.588003406Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 03:09:53.667066 systemd-networkd[1260]: calib4cdbd52d77: Gained IPv6LL Jan 24 03:09:53.730923 systemd-networkd[1260]: cali1e4aa935ab7: Gained IPv6LL Jan 24 03:09:53.835894 systemd-networkd[1260]: calib26ed893dc3: Link UP Jan 24 03:09:53.837966 systemd-networkd[1260]: calib26ed893dc3: Gained carrier Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.566 [INFO][5054] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0 coredns-668d6bf9bc- kube-system b869920c-5e36-401e-9670-1efb848b70fd 1029 0 2026-01-24 03:09:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-jddbi.gb1.brightbox.com coredns-668d6bf9bc-jvx2v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib26ed893dc3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" Namespace="kube-system" Pod="coredns-668d6bf9bc-jvx2v" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-" Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.566 [INFO][5054] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" Namespace="kube-system" Pod="coredns-668d6bf9bc-jvx2v" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.698 [INFO][5079] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" HandleID="k8s-pod-network.fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.698 [INFO][5079] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" HandleID="k8s-pod-network.fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031fb90), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-jddbi.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-jvx2v", "timestamp":"2026-01-24 03:09:53.69846188 +0000 UTC"}, Hostname:"srv-jddbi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.698 [INFO][5079] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.699 [INFO][5079] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.699 [INFO][5079] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jddbi.gb1.brightbox.com' Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.725 [INFO][5079] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.754 [INFO][5079] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.771 [INFO][5079] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.779 [INFO][5079] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.786 [INFO][5079] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.787 [INFO][5079] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.792 [INFO][5079] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.799 [INFO][5079] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.819 [INFO][5079] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.8/26] block=192.168.25.0/26 handle="k8s-pod-network.fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.819 [INFO][5079] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.8/26] handle="k8s-pod-network.fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" host="srv-jddbi.gb1.brightbox.com" Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.820 [INFO][5079] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 03:09:53.888806 containerd[1627]: 2026-01-24 03:09:53.820 [INFO][5079] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.8/26] IPv6=[] ContainerID="fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" HandleID="k8s-pod-network.fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:53.898246 containerd[1627]: 2026-01-24 03:09:53.824 [INFO][5054] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" Namespace="kube-system" Pod="coredns-668d6bf9bc-jvx2v" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b869920c-5e36-401e-9670-1efb848b70fd", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-jvx2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib26ed893dc3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:53.898246 containerd[1627]: 2026-01-24 03:09:53.824 [INFO][5054] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.8/32] ContainerID="fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" Namespace="kube-system" Pod="coredns-668d6bf9bc-jvx2v" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:53.898246 containerd[1627]: 2026-01-24 03:09:53.824 [INFO][5054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib26ed893dc3 ContainerID="fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" Namespace="kube-system" Pod="coredns-668d6bf9bc-jvx2v" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:53.898246 containerd[1627]: 2026-01-24 03:09:53.839 [INFO][5054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-jvx2v" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:53.898246 containerd[1627]: 2026-01-24 03:09:53.841 [INFO][5054] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" Namespace="kube-system" Pod="coredns-668d6bf9bc-jvx2v" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b869920c-5e36-401e-9670-1efb848b70fd", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc", Pod:"coredns-668d6bf9bc-jvx2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib26ed893dc3", MAC:"d2:55:da:af:19:bc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:53.898246 containerd[1627]: 2026-01-24 03:09:53.870 [INFO][5054] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc" Namespace="kube-system" Pod="coredns-668d6bf9bc-jvx2v" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:53.928997 containerd[1627]: time="2026-01-24T03:09:53.928758100Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:09:53.933433 containerd[1627]: time="2026-01-24T03:09:53.933248739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 03:09:53.933433 containerd[1627]: time="2026-01-24T03:09:53.933375915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 03:09:53.937804 kubelet[2868]: E0124 03:09:53.934052 2868 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 03:09:53.937804 kubelet[2868]: E0124 03:09:53.934127 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 03:09:53.937804 kubelet[2868]: E0124 03:09:53.935982 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2544f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
goldmane-666569f655-54qqp_calico-system(b46b6c51-14b1-4c45-8faa-d27677477dc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 03:09:53.937804 kubelet[2868]: E0124 03:09:53.937403 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-54qqp" podUID="b46b6c51-14b1-4c45-8faa-d27677477dc3" Jan 24 03:09:53.942785 containerd[1627]: time="2026-01-24T03:09:53.940858969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 03:09:53.988496 systemd-networkd[1260]: calib073ff2bd76: Gained IPv6LL Jan 24 03:09:54.045857 containerd[1627]: time="2026-01-24T03:09:54.042247199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:09:54.045857 containerd[1627]: time="2026-01-24T03:09:54.042355266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:09:54.045857 containerd[1627]: time="2026-01-24T03:09:54.042380599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:54.056296 containerd[1627]: time="2026-01-24T03:09:54.054447770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:09:54.132725 systemd[1]: run-containerd-runc-k8s.io-fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc-runc.1FpkOC.mount: Deactivated successfully. 
Jan 24 03:09:54.186099 kubelet[2868]: E0124 03:09:54.186014 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-54qqp" podUID="b46b6c51-14b1-4c45-8faa-d27677477dc3" Jan 24 03:09:54.198967 kubelet[2868]: E0124 03:09:54.198895 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" podUID="7b9a31a8-5cc7-4ee4-9145-620e764b84d5" Jan 24 03:09:54.244912 systemd-networkd[1260]: cali2a620102199: Gained IPv6LL Jan 24 03:09:54.289249 kubelet[2868]: I0124 03:09:54.289145 2868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dpcwb" podStartSLOduration=52.28910925 podStartE2EDuration="52.28910925s" podCreationTimestamp="2026-01-24 03:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:09:54.242041575 +0000 UTC m=+58.341288794" watchObservedRunningTime="2026-01-24 03:09:54.28910925 +0000 UTC m=+58.388356447" Jan 24 03:09:54.339628 containerd[1627]: time="2026-01-24T03:09:54.337651933Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:09:54.349964 containerd[1627]: time="2026-01-24T03:09:54.349725620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 03:09:54.349964 containerd[1627]: time="2026-01-24T03:09:54.349802994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 03:09:54.352690 kubelet[2868]: E0124 03:09:54.350150 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 03:09:54.352690 kubelet[2868]: E0124 03:09:54.350284 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 03:09:54.352690 kubelet[2868]: E0124 
03:09:54.351032 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxfv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569dd98ffb-4br8n_calico-apiserver(00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 03:09:54.355773 kubelet[2868]: E0124 03:09:54.353663 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" podUID="00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c" Jan 24 03:09:54.373168 containerd[1627]: time="2026-01-24T03:09:54.371914393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jvx2v,Uid:b869920c-5e36-401e-9670-1efb848b70fd,Namespace:kube-system,Attempt:1,} returns sandbox id \"fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc\"" Jan 24 03:09:54.381396 containerd[1627]: time="2026-01-24T03:09:54.380710022Z" level=info msg="CreateContainer within sandbox \"fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc\" for 
container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 03:09:54.436204 containerd[1627]: time="2026-01-24T03:09:54.435917616Z" level=info msg="CreateContainer within sandbox \"fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"81c63a3381993b44cf7c630753084b4e7c96ec7c5faa9027d93c2190d89bfd05\"" Jan 24 03:09:54.438156 containerd[1627]: time="2026-01-24T03:09:54.437646389Z" level=info msg="StartContainer for \"81c63a3381993b44cf7c630753084b4e7c96ec7c5faa9027d93c2190d89bfd05\"" Jan 24 03:09:54.563671 containerd[1627]: time="2026-01-24T03:09:54.563525426Z" level=info msg="StartContainer for \"81c63a3381993b44cf7c630753084b4e7c96ec7c5faa9027d93c2190d89bfd05\" returns successfully" Jan 24 03:09:54.977790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount768644501.mount: Deactivated successfully. Jan 24 03:09:55.216700 kubelet[2868]: E0124 03:09:55.216533 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-54qqp" podUID="b46b6c51-14b1-4c45-8faa-d27677477dc3" Jan 24 03:09:55.218164 kubelet[2868]: E0124 03:09:55.217659 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" podUID="7b9a31a8-5cc7-4ee4-9145-620e764b84d5" Jan 24 03:09:55.218164 kubelet[2868]: E0124 03:09:55.217970 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" podUID="00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c" Jan 24 03:09:55.253438 kubelet[2868]: I0124 03:09:55.253267 2868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jvx2v" podStartSLOduration=53.253246382 podStartE2EDuration="53.253246382s" podCreationTimestamp="2026-01-24 03:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:09:55.252961021 +0000 UTC m=+59.352208250" watchObservedRunningTime="2026-01-24 03:09:55.253246382 +0000 UTC m=+59.352493578" Jan 24 03:09:55.267667 systemd-networkd[1260]: calib26ed893dc3: Gained IPv6LL Jan 24 03:09:56.086121 containerd[1627]: 
time="2026-01-24T03:09:56.086059609Z" level=info msg="StopPodSandbox for \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\"" Jan 24 03:09:56.231842 containerd[1627]: 2026-01-24 03:09:56.162 [WARNING][5189] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0", GenerateName:"calico-apiserver-569dd98ffb-", Namespace:"calico-apiserver", SelfLink:"", UID:"00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"569dd98ffb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7", Pod:"calico-apiserver-569dd98ffb-4br8n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4cdbd52d77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:56.231842 containerd[1627]: 2026-01-24 03:09:56.163 [INFO][5189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Jan 24 03:09:56.231842 containerd[1627]: 2026-01-24 03:09:56.163 [INFO][5189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" iface="eth0" netns="" Jan 24 03:09:56.231842 containerd[1627]: 2026-01-24 03:09:56.163 [INFO][5189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Jan 24 03:09:56.231842 containerd[1627]: 2026-01-24 03:09:56.163 [INFO][5189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Jan 24 03:09:56.231842 containerd[1627]: 2026-01-24 03:09:56.196 [INFO][5198] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" HandleID="k8s-pod-network.da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:56.231842 containerd[1627]: 2026-01-24 03:09:56.196 [INFO][5198] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 03:09:56.231842 containerd[1627]: 2026-01-24 03:09:56.196 [INFO][5198] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:56.231842 containerd[1627]: 2026-01-24 03:09:56.215 [WARNING][5198] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" HandleID="k8s-pod-network.da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:56.231842 containerd[1627]: 2026-01-24 03:09:56.215 [INFO][5198] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" HandleID="k8s-pod-network.da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:56.231842 containerd[1627]: 2026-01-24 03:09:56.222 [INFO][5198] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:56.231842 containerd[1627]: 2026-01-24 03:09:56.227 [INFO][5189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Jan 24 03:09:56.231842 containerd[1627]: time="2026-01-24T03:09:56.231630107Z" level=info msg="TearDown network for sandbox \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\" successfully" Jan 24 03:09:56.231842 containerd[1627]: time="2026-01-24T03:09:56.231677565Z" level=info msg="StopPodSandbox for \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\" returns successfully" Jan 24 03:09:56.241585 containerd[1627]: time="2026-01-24T03:09:56.240499711Z" level=info msg="RemovePodSandbox for \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\"" Jan 24 03:09:56.241585 containerd[1627]: time="2026-01-24T03:09:56.240588207Z" level=info msg="Forcibly stopping sandbox \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\"" Jan 24 03:09:56.422990 containerd[1627]: 2026-01-24 03:09:56.342 [WARNING][5212] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0", GenerateName:"calico-apiserver-569dd98ffb-", Namespace:"calico-apiserver", SelfLink:"", UID:"00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"569dd98ffb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"e5d7199eeaca51f9d721868ce7cc10300066688e42db3d020aebb3e84a450ca7", Pod:"calico-apiserver-569dd98ffb-4br8n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4cdbd52d77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:56.422990 containerd[1627]: 2026-01-24 03:09:56.342 [INFO][5212] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Jan 24 03:09:56.422990 containerd[1627]: 2026-01-24 03:09:56.342 [INFO][5212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" iface="eth0" netns="" Jan 24 03:09:56.422990 containerd[1627]: 2026-01-24 03:09:56.342 [INFO][5212] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Jan 24 03:09:56.422990 containerd[1627]: 2026-01-24 03:09:56.342 [INFO][5212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Jan 24 03:09:56.422990 containerd[1627]: 2026-01-24 03:09:56.387 [INFO][5220] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" HandleID="k8s-pod-network.da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:56.422990 containerd[1627]: 2026-01-24 03:09:56.387 [INFO][5220] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:56.422990 containerd[1627]: 2026-01-24 03:09:56.388 [INFO][5220] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:56.422990 containerd[1627]: 2026-01-24 03:09:56.407 [WARNING][5220] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" HandleID="k8s-pod-network.da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:56.422990 containerd[1627]: 2026-01-24 03:09:56.407 [INFO][5220] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" HandleID="k8s-pod-network.da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--4br8n-eth0" Jan 24 03:09:56.422990 containerd[1627]: 2026-01-24 03:09:56.416 [INFO][5220] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:56.422990 containerd[1627]: 2026-01-24 03:09:56.419 [INFO][5212] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31" Jan 24 03:09:56.422990 containerd[1627]: time="2026-01-24T03:09:56.422854219Z" level=info msg="TearDown network for sandbox \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\" successfully" Jan 24 03:09:56.429903 containerd[1627]: time="2026-01-24T03:09:56.429827518Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 03:09:56.429994 containerd[1627]: time="2026-01-24T03:09:56.429950381Z" level=info msg="RemovePodSandbox \"da72f56feb3b859210fc3c3399bde3f0a7f82a95786f224f98e6eae9970fcd31\" returns successfully" Jan 24 03:09:56.431280 containerd[1627]: time="2026-01-24T03:09:56.430848850Z" level=info msg="StopPodSandbox for \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\"" Jan 24 03:09:56.538263 containerd[1627]: 2026-01-24 03:09:56.483 [WARNING][5234] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-whisker--5ffff4665c--wr7xc-eth0" Jan 24 03:09:56.538263 containerd[1627]: 2026-01-24 03:09:56.484 [INFO][5234] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Jan 24 03:09:56.538263 containerd[1627]: 2026-01-24 03:09:56.484 [INFO][5234] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" iface="eth0" netns="" Jan 24 03:09:56.538263 containerd[1627]: 2026-01-24 03:09:56.484 [INFO][5234] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Jan 24 03:09:56.538263 containerd[1627]: 2026-01-24 03:09:56.484 [INFO][5234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Jan 24 03:09:56.538263 containerd[1627]: 2026-01-24 03:09:56.514 [INFO][5241] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" HandleID="k8s-pod-network.b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Workload="srv--jddbi.gb1.brightbox.com-k8s-whisker--5ffff4665c--wr7xc-eth0" Jan 24 03:09:56.538263 containerd[1627]: 2026-01-24 03:09:56.514 [INFO][5241] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:56.538263 containerd[1627]: 2026-01-24 03:09:56.515 [INFO][5241] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:56.538263 containerd[1627]: 2026-01-24 03:09:56.528 [WARNING][5241] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" HandleID="k8s-pod-network.b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Workload="srv--jddbi.gb1.brightbox.com-k8s-whisker--5ffff4665c--wr7xc-eth0" Jan 24 03:09:56.538263 containerd[1627]: 2026-01-24 03:09:56.528 [INFO][5241] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" HandleID="k8s-pod-network.b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Workload="srv--jddbi.gb1.brightbox.com-k8s-whisker--5ffff4665c--wr7xc-eth0" Jan 24 03:09:56.538263 containerd[1627]: 2026-01-24 03:09:56.533 [INFO][5241] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:56.538263 containerd[1627]: 2026-01-24 03:09:56.536 [INFO][5234] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Jan 24 03:09:56.540143 containerd[1627]: time="2026-01-24T03:09:56.538398930Z" level=info msg="TearDown network for sandbox \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\" successfully" Jan 24 03:09:56.540143 containerd[1627]: time="2026-01-24T03:09:56.538448298Z" level=info msg="StopPodSandbox for \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\" returns successfully" Jan 24 03:09:56.540143 containerd[1627]: time="2026-01-24T03:09:56.539685744Z" level=info msg="RemovePodSandbox for \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\"" Jan 24 03:09:56.540143 containerd[1627]: time="2026-01-24T03:09:56.539730465Z" level=info msg="Forcibly stopping sandbox \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\"" Jan 24 03:09:56.675905 containerd[1627]: 2026-01-24 03:09:56.593 [WARNING][5255] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" WorkloadEndpoint="srv--jddbi.gb1.brightbox.com-k8s-whisker--5ffff4665c--wr7xc-eth0" Jan 24 03:09:56.675905 containerd[1627]: 2026-01-24 03:09:56.593 [INFO][5255] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Jan 24 03:09:56.675905 containerd[1627]: 2026-01-24 03:09:56.593 [INFO][5255] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" iface="eth0" netns="" Jan 24 03:09:56.675905 containerd[1627]: 2026-01-24 03:09:56.593 [INFO][5255] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Jan 24 03:09:56.675905 containerd[1627]: 2026-01-24 03:09:56.593 [INFO][5255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Jan 24 03:09:56.675905 containerd[1627]: 2026-01-24 03:09:56.643 [INFO][5263] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" HandleID="k8s-pod-network.b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Workload="srv--jddbi.gb1.brightbox.com-k8s-whisker--5ffff4665c--wr7xc-eth0" Jan 24 03:09:56.675905 containerd[1627]: 2026-01-24 03:09:56.643 [INFO][5263] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:56.675905 containerd[1627]: 2026-01-24 03:09:56.644 [INFO][5263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:56.675905 containerd[1627]: 2026-01-24 03:09:56.661 [WARNING][5263] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" HandleID="k8s-pod-network.b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Workload="srv--jddbi.gb1.brightbox.com-k8s-whisker--5ffff4665c--wr7xc-eth0" Jan 24 03:09:56.675905 containerd[1627]: 2026-01-24 03:09:56.661 [INFO][5263] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" HandleID="k8s-pod-network.b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Workload="srv--jddbi.gb1.brightbox.com-k8s-whisker--5ffff4665c--wr7xc-eth0" Jan 24 03:09:56.675905 containerd[1627]: 2026-01-24 03:09:56.670 [INFO][5263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:56.675905 containerd[1627]: 2026-01-24 03:09:56.673 [INFO][5255] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc" Jan 24 03:09:56.675905 containerd[1627]: time="2026-01-24T03:09:56.675208104Z" level=info msg="TearDown network for sandbox \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\" successfully" Jan 24 03:09:56.680684 containerd[1627]: time="2026-01-24T03:09:56.680637480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 03:09:56.680888 containerd[1627]: time="2026-01-24T03:09:56.680859036Z" level=info msg="RemovePodSandbox \"b221742a328283ef12b54ec2fd2f0cbd65224885b72f6d188419ba9ac89208dc\" returns successfully" Jan 24 03:09:56.681927 containerd[1627]: time="2026-01-24T03:09:56.681883543Z" level=info msg="StopPodSandbox for \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\"" Jan 24 03:09:56.819910 containerd[1627]: 2026-01-24 03:09:56.744 [WARNING][5279] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b46b6c51-14b1-4c45-8faa-d27677477dc3", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543", Pod:"goldmane-666569f655-54qqp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a620102199", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:56.819910 containerd[1627]: 2026-01-24 03:09:56.745 [INFO][5279] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Jan 24 03:09:56.819910 containerd[1627]: 2026-01-24 03:09:56.745 [INFO][5279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" iface="eth0" netns="" Jan 24 03:09:56.819910 containerd[1627]: 2026-01-24 03:09:56.745 [INFO][5279] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Jan 24 03:09:56.819910 containerd[1627]: 2026-01-24 03:09:56.745 [INFO][5279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Jan 24 03:09:56.819910 containerd[1627]: 2026-01-24 03:09:56.780 [INFO][5287] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" HandleID="k8s-pod-network.d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Workload="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:56.819910 containerd[1627]: 2026-01-24 03:09:56.781 [INFO][5287] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:56.819910 containerd[1627]: 2026-01-24 03:09:56.781 [INFO][5287] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:56.819910 containerd[1627]: 2026-01-24 03:09:56.798 [WARNING][5287] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" HandleID="k8s-pod-network.d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Workload="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:56.819910 containerd[1627]: 2026-01-24 03:09:56.798 [INFO][5287] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" HandleID="k8s-pod-network.d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Workload="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:56.819910 containerd[1627]: 2026-01-24 03:09:56.812 [INFO][5287] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:56.819910 containerd[1627]: 2026-01-24 03:09:56.815 [INFO][5279] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Jan 24 03:09:56.822928 containerd[1627]: time="2026-01-24T03:09:56.820013968Z" level=info msg="TearDown network for sandbox \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\" successfully" Jan 24 03:09:56.822928 containerd[1627]: time="2026-01-24T03:09:56.820066382Z" level=info msg="StopPodSandbox for \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\" returns successfully" Jan 24 03:09:56.822928 containerd[1627]: time="2026-01-24T03:09:56.821161216Z" level=info msg="RemovePodSandbox for \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\"" Jan 24 03:09:56.822928 containerd[1627]: time="2026-01-24T03:09:56.821201375Z" level=info msg="Forcibly stopping sandbox \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\"" Jan 24 03:09:56.945909 containerd[1627]: 2026-01-24 03:09:56.876 [WARNING][5302] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b46b6c51-14b1-4c45-8faa-d27677477dc3", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"b07d6c924648199e57c98695576dac226d233e4f06fe431fe673e1e872c77543", Pod:"goldmane-666569f655-54qqp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a620102199", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:56.945909 containerd[1627]: 2026-01-24 03:09:56.876 [INFO][5302] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Jan 24 03:09:56.945909 containerd[1627]: 2026-01-24 03:09:56.876 [INFO][5302] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" iface="eth0" netns="" Jan 24 03:09:56.945909 containerd[1627]: 2026-01-24 03:09:56.876 [INFO][5302] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Jan 24 03:09:56.945909 containerd[1627]: 2026-01-24 03:09:56.877 [INFO][5302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Jan 24 03:09:56.945909 containerd[1627]: 2026-01-24 03:09:56.907 [INFO][5309] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" HandleID="k8s-pod-network.d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Workload="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:56.945909 containerd[1627]: 2026-01-24 03:09:56.907 [INFO][5309] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:56.945909 containerd[1627]: 2026-01-24 03:09:56.907 [INFO][5309] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:56.945909 containerd[1627]: 2026-01-24 03:09:56.927 [WARNING][5309] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" HandleID="k8s-pod-network.d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Workload="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:56.945909 containerd[1627]: 2026-01-24 03:09:56.927 [INFO][5309] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" HandleID="k8s-pod-network.d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Workload="srv--jddbi.gb1.brightbox.com-k8s-goldmane--666569f655--54qqp-eth0" Jan 24 03:09:56.945909 containerd[1627]: 2026-01-24 03:09:56.940 [INFO][5309] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:56.945909 containerd[1627]: 2026-01-24 03:09:56.943 [INFO][5302] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972" Jan 24 03:09:56.945909 containerd[1627]: time="2026-01-24T03:09:56.945754056Z" level=info msg="TearDown network for sandbox \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\" successfully" Jan 24 03:09:56.953155 containerd[1627]: time="2026-01-24T03:09:56.952844198Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 03:09:56.953155 containerd[1627]: time="2026-01-24T03:09:56.952934309Z" level=info msg="RemovePodSandbox \"d71b34776ec8c9620f8018e5e489306741b2a683c040c3442de0e86b7d5b8972\" returns successfully" Jan 24 03:09:56.954058 containerd[1627]: time="2026-01-24T03:09:56.953502666Z" level=info msg="StopPodSandbox for \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\"" Jan 24 03:09:57.109489 containerd[1627]: 2026-01-24 03:09:57.049 [WARNING][5323] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0", GenerateName:"calico-kube-controllers-64cd87fbdf-", Namespace:"calico-system", SelfLink:"", UID:"7b9a31a8-5cc7-4ee4-9145-620e764b84d5", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64cd87fbdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316", Pod:"calico-kube-controllers-64cd87fbdf-87r2w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib073ff2bd76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:57.109489 containerd[1627]: 2026-01-24 03:09:57.050 [INFO][5323] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Jan 24 03:09:57.109489 containerd[1627]: 2026-01-24 03:09:57.050 [INFO][5323] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" iface="eth0" netns="" Jan 24 03:09:57.109489 containerd[1627]: 2026-01-24 03:09:57.050 [INFO][5323] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Jan 24 03:09:57.109489 containerd[1627]: 2026-01-24 03:09:57.050 [INFO][5323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Jan 24 03:09:57.109489 containerd[1627]: 2026-01-24 03:09:57.080 [INFO][5330] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" HandleID="k8s-pod-network.721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:57.109489 containerd[1627]: 2026-01-24 03:09:57.081 [INFO][5330] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:57.109489 containerd[1627]: 2026-01-24 03:09:57.081 [INFO][5330] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:57.109489 containerd[1627]: 2026-01-24 03:09:57.096 [WARNING][5330] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" HandleID="k8s-pod-network.721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:57.109489 containerd[1627]: 2026-01-24 03:09:57.096 [INFO][5330] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" HandleID="k8s-pod-network.721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:57.109489 containerd[1627]: 2026-01-24 03:09:57.105 [INFO][5330] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:57.109489 containerd[1627]: 2026-01-24 03:09:57.107 [INFO][5323] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Jan 24 03:09:57.111236 containerd[1627]: time="2026-01-24T03:09:57.110981815Z" level=info msg="TearDown network for sandbox \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\" successfully" Jan 24 03:09:57.111236 containerd[1627]: time="2026-01-24T03:09:57.111036390Z" level=info msg="StopPodSandbox for \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\" returns successfully" Jan 24 03:09:57.112298 containerd[1627]: time="2026-01-24T03:09:57.111786754Z" level=info msg="RemovePodSandbox for \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\"" Jan 24 03:09:57.112298 containerd[1627]: time="2026-01-24T03:09:57.111828556Z" level=info msg="Forcibly stopping sandbox \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\"" Jan 24 03:09:57.239052 containerd[1627]: 2026-01-24 03:09:57.167 [WARNING][5344] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0", GenerateName:"calico-kube-controllers-64cd87fbdf-", Namespace:"calico-system", SelfLink:"", UID:"7b9a31a8-5cc7-4ee4-9145-620e764b84d5", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64cd87fbdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"3b9288c673235db24f797bcde24bebbf1ccef095c7561831c2ffcc4a1e441316", Pod:"calico-kube-controllers-64cd87fbdf-87r2w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib073ff2bd76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:57.239052 containerd[1627]: 2026-01-24 03:09:57.168 [INFO][5344] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Jan 24 03:09:57.239052 containerd[1627]: 2026-01-24 03:09:57.168 [INFO][5344] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" iface="eth0" netns="" Jan 24 03:09:57.239052 containerd[1627]: 2026-01-24 03:09:57.168 [INFO][5344] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Jan 24 03:09:57.239052 containerd[1627]: 2026-01-24 03:09:57.168 [INFO][5344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Jan 24 03:09:57.239052 containerd[1627]: 2026-01-24 03:09:57.202 [INFO][5351] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" HandleID="k8s-pod-network.721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:57.239052 containerd[1627]: 2026-01-24 03:09:57.203 [INFO][5351] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:57.239052 containerd[1627]: 2026-01-24 03:09:57.203 [INFO][5351] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:57.239052 containerd[1627]: 2026-01-24 03:09:57.222 [WARNING][5351] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" HandleID="k8s-pod-network.721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:57.239052 containerd[1627]: 2026-01-24 03:09:57.222 [INFO][5351] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" HandleID="k8s-pod-network.721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--kube--controllers--64cd87fbdf--87r2w-eth0" Jan 24 03:09:57.239052 containerd[1627]: 2026-01-24 03:09:57.235 [INFO][5351] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:57.239052 containerd[1627]: 2026-01-24 03:09:57.237 [INFO][5344] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf" Jan 24 03:09:57.241543 containerd[1627]: time="2026-01-24T03:09:57.239828805Z" level=info msg="TearDown network for sandbox \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\" successfully" Jan 24 03:09:57.252413 containerd[1627]: time="2026-01-24T03:09:57.252259970Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 03:09:57.252413 containerd[1627]: time="2026-01-24T03:09:57.252391762Z" level=info msg="RemovePodSandbox \"721689af1a392b0168696efde86b7f7e233731b91658c8ab97ae5826a17089bf\" returns successfully" Jan 24 03:09:57.253355 containerd[1627]: time="2026-01-24T03:09:57.253309452Z" level=info msg="StopPodSandbox for \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\"" Jan 24 03:09:57.461301 containerd[1627]: 2026-01-24 03:09:57.346 [WARNING][5366] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b869920c-5e36-401e-9670-1efb848b70fd", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc", Pod:"coredns-668d6bf9bc-jvx2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib26ed893dc3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:57.461301 containerd[1627]: 2026-01-24 03:09:57.347 [INFO][5366] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Jan 24 03:09:57.461301 containerd[1627]: 2026-01-24 03:09:57.347 [INFO][5366] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" iface="eth0" netns="" Jan 24 03:09:57.461301 containerd[1627]: 2026-01-24 03:09:57.347 [INFO][5366] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Jan 24 03:09:57.461301 containerd[1627]: 2026-01-24 03:09:57.347 [INFO][5366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Jan 24 03:09:57.461301 containerd[1627]: 2026-01-24 03:09:57.412 [INFO][5373] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" HandleID="k8s-pod-network.e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:57.461301 containerd[1627]: 2026-01-24 03:09:57.413 [INFO][5373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:57.461301 containerd[1627]: 2026-01-24 03:09:57.413 [INFO][5373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 03:09:57.461301 containerd[1627]: 2026-01-24 03:09:57.438 [WARNING][5373] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" HandleID="k8s-pod-network.e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:57.461301 containerd[1627]: 2026-01-24 03:09:57.438 [INFO][5373] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" HandleID="k8s-pod-network.e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:57.461301 containerd[1627]: 2026-01-24 03:09:57.452 [INFO][5373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:57.461301 containerd[1627]: 2026-01-24 03:09:57.457 [INFO][5366] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Jan 24 03:09:57.469267 containerd[1627]: time="2026-01-24T03:09:57.461376760Z" level=info msg="TearDown network for sandbox \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\" successfully" Jan 24 03:09:57.469267 containerd[1627]: time="2026-01-24T03:09:57.461428589Z" level=info msg="StopPodSandbox for \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\" returns successfully" Jan 24 03:09:57.469267 containerd[1627]: time="2026-01-24T03:09:57.464649897Z" level=info msg="RemovePodSandbox for \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\"" Jan 24 03:09:57.469267 containerd[1627]: time="2026-01-24T03:09:57.464691009Z" level=info msg="Forcibly stopping sandbox \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\"" Jan 24 03:09:57.669252 containerd[1627]: 2026-01-24 03:09:57.567 [WARNING][5387] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b869920c-5e36-401e-9670-1efb848b70fd", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"fdcc9d59c31ccf8ab989760bb336f61494febbd693c7d2b7dc893a1b2fcf9abc", Pod:"coredns-668d6bf9bc-jvx2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib26ed893dc3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:57.669252 containerd[1627]: 2026-01-24 03:09:57.568 [INFO][5387] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Jan 24 03:09:57.669252 containerd[1627]: 2026-01-24 03:09:57.568 [INFO][5387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" iface="eth0" netns="" Jan 24 03:09:57.669252 containerd[1627]: 2026-01-24 03:09:57.568 [INFO][5387] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Jan 24 03:09:57.669252 containerd[1627]: 2026-01-24 03:09:57.568 [INFO][5387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Jan 24 03:09:57.669252 containerd[1627]: 2026-01-24 03:09:57.626 [INFO][5395] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" HandleID="k8s-pod-network.e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:57.669252 containerd[1627]: 2026-01-24 03:09:57.626 [INFO][5395] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:57.669252 containerd[1627]: 2026-01-24 03:09:57.626 [INFO][5395] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 03:09:57.669252 containerd[1627]: 2026-01-24 03:09:57.649 [WARNING][5395] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" HandleID="k8s-pod-network.e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:57.669252 containerd[1627]: 2026-01-24 03:09:57.649 [INFO][5395] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" HandleID="k8s-pod-network.e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jvx2v-eth0" Jan 24 03:09:57.669252 containerd[1627]: 2026-01-24 03:09:57.663 [INFO][5395] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:57.669252 containerd[1627]: 2026-01-24 03:09:57.666 [INFO][5387] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67" Jan 24 03:09:57.671056 containerd[1627]: time="2026-01-24T03:09:57.669456145Z" level=info msg="TearDown network for sandbox \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\" successfully" Jan 24 03:09:57.676308 containerd[1627]: time="2026-01-24T03:09:57.675982629Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 03:09:57.676308 containerd[1627]: time="2026-01-24T03:09:57.676094869Z" level=info msg="RemovePodSandbox \"e875d0be944d09c063f71f784418d77f616dba1a1b0235df5878304fc7e79d67\" returns successfully" Jan 24 03:09:57.677002 containerd[1627]: time="2026-01-24T03:09:57.676847524Z" level=info msg="StopPodSandbox for \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\"" Jan 24 03:09:57.842450 containerd[1627]: 2026-01-24 03:09:57.761 [WARNING][5410] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0", GenerateName:"calico-apiserver-569dd98ffb-", Namespace:"calico-apiserver", SelfLink:"", UID:"76ab4499-021b-4baa-941b-8b5ea5143e46", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"569dd98ffb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff", Pod:"calico-apiserver-569dd98ffb-zpcp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56422ef3f21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:57.842450 containerd[1627]: 2026-01-24 03:09:57.762 [INFO][5410] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Jan 24 03:09:57.842450 containerd[1627]: 2026-01-24 03:09:57.762 [INFO][5410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" iface="eth0" netns="" Jan 24 03:09:57.842450 containerd[1627]: 2026-01-24 03:09:57.763 [INFO][5410] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Jan 24 03:09:57.842450 containerd[1627]: 2026-01-24 03:09:57.763 [INFO][5410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Jan 24 03:09:57.842450 containerd[1627]: 2026-01-24 03:09:57.817 [INFO][5417] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" HandleID="k8s-pod-network.edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:57.842450 containerd[1627]: 2026-01-24 03:09:57.817 [INFO][5417] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:57.842450 containerd[1627]: 2026-01-24 03:09:57.818 [INFO][5417] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:57.842450 containerd[1627]: 2026-01-24 03:09:57.832 [WARNING][5417] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" HandleID="k8s-pod-network.edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:57.842450 containerd[1627]: 2026-01-24 03:09:57.832 [INFO][5417] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" HandleID="k8s-pod-network.edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:57.842450 containerd[1627]: 2026-01-24 03:09:57.837 [INFO][5417] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:57.842450 containerd[1627]: 2026-01-24 03:09:57.840 [INFO][5410] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Jan 24 03:09:57.844696 containerd[1627]: time="2026-01-24T03:09:57.842836463Z" level=info msg="TearDown network for sandbox \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\" successfully" Jan 24 03:09:57.844696 containerd[1627]: time="2026-01-24T03:09:57.842902125Z" level=info msg="StopPodSandbox for \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\" returns successfully" Jan 24 03:09:57.844948 containerd[1627]: time="2026-01-24T03:09:57.844902143Z" level=info msg="RemovePodSandbox for \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\"" Jan 24 03:09:57.845012 containerd[1627]: time="2026-01-24T03:09:57.844959806Z" level=info msg="Forcibly stopping sandbox \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\"" Jan 24 03:09:58.016003 containerd[1627]: 2026-01-24 03:09:57.937 [WARNING][5431] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0", GenerateName:"calico-apiserver-569dd98ffb-", Namespace:"calico-apiserver", SelfLink:"", UID:"76ab4499-021b-4baa-941b-8b5ea5143e46", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"569dd98ffb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"a45f7a09104dcbab11b1e9ba99fb59042a73281e37e4ac2e0bac12d0e7eac4ff", Pod:"calico-apiserver-569dd98ffb-zpcp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56422ef3f21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:58.016003 containerd[1627]: 2026-01-24 03:09:57.938 [INFO][5431] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Jan 24 03:09:58.016003 containerd[1627]: 2026-01-24 03:09:57.938 [INFO][5431] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" iface="eth0" netns="" Jan 24 03:09:58.016003 containerd[1627]: 2026-01-24 03:09:57.938 [INFO][5431] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Jan 24 03:09:58.016003 containerd[1627]: 2026-01-24 03:09:57.938 [INFO][5431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Jan 24 03:09:58.016003 containerd[1627]: 2026-01-24 03:09:57.986 [INFO][5438] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" HandleID="k8s-pod-network.edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:58.016003 containerd[1627]: 2026-01-24 03:09:57.987 [INFO][5438] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:58.016003 containerd[1627]: 2026-01-24 03:09:57.987 [INFO][5438] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:58.016003 containerd[1627]: 2026-01-24 03:09:57.999 [WARNING][5438] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" HandleID="k8s-pod-network.edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:58.016003 containerd[1627]: 2026-01-24 03:09:57.999 [INFO][5438] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" HandleID="k8s-pod-network.edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Workload="srv--jddbi.gb1.brightbox.com-k8s-calico--apiserver--569dd98ffb--zpcp9-eth0" Jan 24 03:09:58.016003 containerd[1627]: 2026-01-24 03:09:58.004 [INFO][5438] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:58.016003 containerd[1627]: 2026-01-24 03:09:58.009 [INFO][5431] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69" Jan 24 03:09:58.016003 containerd[1627]: time="2026-01-24T03:09:58.015805067Z" level=info msg="TearDown network for sandbox \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\" successfully" Jan 24 03:09:58.022507 containerd[1627]: time="2026-01-24T03:09:58.022437929Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 03:09:58.023017 containerd[1627]: time="2026-01-24T03:09:58.022522416Z" level=info msg="RemovePodSandbox \"edbc116c2a4408907128cc5a548991fb00cb5d07f0b75eddd5de79046d167f69\" returns successfully" Jan 24 03:09:58.023773 containerd[1627]: time="2026-01-24T03:09:58.023296901Z" level=info msg="StopPodSandbox for \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\"" Jan 24 03:09:58.143657 containerd[1627]: 2026-01-24 03:09:58.083 [WARNING][5459] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3d4cc92-f20f-4793-8073-7a8fb294fc7f", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07", Pod:"csi-node-driver-7rk5p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali220ced87d92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:58.143657 containerd[1627]: 2026-01-24 03:09:58.084 [INFO][5459] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Jan 24 03:09:58.143657 containerd[1627]: 2026-01-24 03:09:58.084 [INFO][5459] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" iface="eth0" netns="" Jan 24 03:09:58.143657 containerd[1627]: 2026-01-24 03:09:58.084 [INFO][5459] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Jan 24 03:09:58.143657 containerd[1627]: 2026-01-24 03:09:58.084 [INFO][5459] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Jan 24 03:09:58.143657 containerd[1627]: 2026-01-24 03:09:58.119 [INFO][5466] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" HandleID="k8s-pod-network.5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Workload="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:58.143657 containerd[1627]: 2026-01-24 03:09:58.119 [INFO][5466] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:58.143657 containerd[1627]: 2026-01-24 03:09:58.119 [INFO][5466] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:58.143657 containerd[1627]: 2026-01-24 03:09:58.137 [WARNING][5466] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" HandleID="k8s-pod-network.5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Workload="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:58.143657 containerd[1627]: 2026-01-24 03:09:58.137 [INFO][5466] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" HandleID="k8s-pod-network.5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Workload="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:58.143657 containerd[1627]: 2026-01-24 03:09:58.139 [INFO][5466] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:58.143657 containerd[1627]: 2026-01-24 03:09:58.141 [INFO][5459] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Jan 24 03:09:58.145072 containerd[1627]: time="2026-01-24T03:09:58.143739149Z" level=info msg="TearDown network for sandbox \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\" successfully" Jan 24 03:09:58.145072 containerd[1627]: time="2026-01-24T03:09:58.143774954Z" level=info msg="StopPodSandbox for \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\" returns successfully" Jan 24 03:09:58.146110 containerd[1627]: time="2026-01-24T03:09:58.145619763Z" level=info msg="RemovePodSandbox for \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\"" Jan 24 03:09:58.146110 containerd[1627]: time="2026-01-24T03:09:58.145660789Z" level=info msg="Forcibly stopping sandbox \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\"" Jan 24 03:09:58.264633 containerd[1627]: 2026-01-24 03:09:58.201 [WARNING][5480] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3d4cc92-f20f-4793-8073-7a8fb294fc7f", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"15a0640a1e000a0b6afaa097e20fb0f3a6e15af2ddf240307821d6e9a483bd07", Pod:"csi-node-driver-7rk5p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali220ced87d92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:58.264633 containerd[1627]: 2026-01-24 03:09:58.202 [INFO][5480] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Jan 24 03:09:58.264633 containerd[1627]: 2026-01-24 03:09:58.202 [INFO][5480] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" iface="eth0" netns="" Jan 24 03:09:58.264633 containerd[1627]: 2026-01-24 03:09:58.202 [INFO][5480] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Jan 24 03:09:58.264633 containerd[1627]: 2026-01-24 03:09:58.202 [INFO][5480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Jan 24 03:09:58.264633 containerd[1627]: 2026-01-24 03:09:58.243 [INFO][5487] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" HandleID="k8s-pod-network.5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Workload="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:58.264633 containerd[1627]: 2026-01-24 03:09:58.244 [INFO][5487] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:58.264633 containerd[1627]: 2026-01-24 03:09:58.244 [INFO][5487] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 03:09:58.264633 containerd[1627]: 2026-01-24 03:09:58.254 [WARNING][5487] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" HandleID="k8s-pod-network.5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Workload="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:58.264633 containerd[1627]: 2026-01-24 03:09:58.254 [INFO][5487] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" HandleID="k8s-pod-network.5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Workload="srv--jddbi.gb1.brightbox.com-k8s-csi--node--driver--7rk5p-eth0" Jan 24 03:09:58.264633 containerd[1627]: 2026-01-24 03:09:58.259 [INFO][5487] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:58.264633 containerd[1627]: 2026-01-24 03:09:58.261 [INFO][5480] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28" Jan 24 03:09:58.264633 containerd[1627]: time="2026-01-24T03:09:58.263939267Z" level=info msg="TearDown network for sandbox \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\" successfully" Jan 24 03:09:58.268263 containerd[1627]: time="2026-01-24T03:09:58.268095506Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 03:09:58.268263 containerd[1627]: time="2026-01-24T03:09:58.268182384Z" level=info msg="RemovePodSandbox \"5f07abd74d30d66f6505660056a639bdffa74588183af1da0a05e02c98918e28\" returns successfully" Jan 24 03:09:58.270319 containerd[1627]: time="2026-01-24T03:09:58.270286514Z" level=info msg="StopPodSandbox for \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\"" Jan 24 03:09:58.384478 containerd[1627]: 2026-01-24 03:09:58.328 [WARNING][5501] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"94ce2e5d-4660-46c3-961b-bbe64cee7f9e", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def", Pod:"coredns-668d6bf9bc-dpcwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e4aa935ab7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:58.384478 containerd[1627]: 2026-01-24 03:09:58.328 [INFO][5501] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Jan 24 03:09:58.384478 containerd[1627]: 2026-01-24 03:09:58.328 [INFO][5501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" iface="eth0" netns="" Jan 24 03:09:58.384478 containerd[1627]: 2026-01-24 03:09:58.328 [INFO][5501] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Jan 24 03:09:58.384478 containerd[1627]: 2026-01-24 03:09:58.328 [INFO][5501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Jan 24 03:09:58.384478 containerd[1627]: 2026-01-24 03:09:58.367 [INFO][5508] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" HandleID="k8s-pod-network.4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:58.384478 containerd[1627]: 2026-01-24 03:09:58.368 [INFO][5508] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:58.384478 containerd[1627]: 2026-01-24 03:09:58.368 [INFO][5508] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 03:09:58.384478 containerd[1627]: 2026-01-24 03:09:58.377 [WARNING][5508] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" HandleID="k8s-pod-network.4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:58.384478 containerd[1627]: 2026-01-24 03:09:58.377 [INFO][5508] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" HandleID="k8s-pod-network.4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:58.384478 containerd[1627]: 2026-01-24 03:09:58.379 [INFO][5508] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:58.384478 containerd[1627]: 2026-01-24 03:09:58.381 [INFO][5501] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Jan 24 03:09:58.384478 containerd[1627]: time="2026-01-24T03:09:58.383940456Z" level=info msg="TearDown network for sandbox \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\" successfully" Jan 24 03:09:58.384478 containerd[1627]: time="2026-01-24T03:09:58.383980403Z" level=info msg="StopPodSandbox for \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\" returns successfully" Jan 24 03:09:58.385306 containerd[1627]: time="2026-01-24T03:09:58.384946796Z" level=info msg="RemovePodSandbox for \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\"" Jan 24 03:09:58.385306 containerd[1627]: time="2026-01-24T03:09:58.384987669Z" level=info msg="Forcibly stopping sandbox \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\"" Jan 24 03:09:58.484418 containerd[1627]: 2026-01-24 03:09:58.433 [WARNING][5522] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"94ce2e5d-4660-46c3-961b-bbe64cee7f9e", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 3, 9, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jddbi.gb1.brightbox.com", ContainerID:"2b66ad238f72cab3be2e25006dad8a8857bd7fc81dee15a0dcfe9c54d3339def", Pod:"coredns-668d6bf9bc-dpcwb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e4aa935ab7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 03:09:58.484418 containerd[1627]: 2026-01-24 03:09:58.434 [INFO][5522] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Jan 24 03:09:58.484418 containerd[1627]: 2026-01-24 03:09:58.434 [INFO][5522] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" iface="eth0" netns="" Jan 24 03:09:58.484418 containerd[1627]: 2026-01-24 03:09:58.434 [INFO][5522] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Jan 24 03:09:58.484418 containerd[1627]: 2026-01-24 03:09:58.434 [INFO][5522] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Jan 24 03:09:58.484418 containerd[1627]: 2026-01-24 03:09:58.468 [INFO][5529] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" HandleID="k8s-pod-network.4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:58.484418 containerd[1627]: 2026-01-24 03:09:58.468 [INFO][5529] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 03:09:58.484418 containerd[1627]: 2026-01-24 03:09:58.468 [INFO][5529] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 03:09:58.484418 containerd[1627]: 2026-01-24 03:09:58.477 [WARNING][5529] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" HandleID="k8s-pod-network.4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:58.484418 containerd[1627]: 2026-01-24 03:09:58.477 [INFO][5529] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" HandleID="k8s-pod-network.4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Workload="srv--jddbi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dpcwb-eth0" Jan 24 03:09:58.484418 containerd[1627]: 2026-01-24 03:09:58.479 [INFO][5529] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 03:09:58.484418 containerd[1627]: 2026-01-24 03:09:58.481 [INFO][5522] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc" Jan 24 03:09:58.484418 containerd[1627]: time="2026-01-24T03:09:58.483322084Z" level=info msg="TearDown network for sandbox \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\" successfully" Jan 24 03:09:58.488052 containerd[1627]: time="2026-01-24T03:09:58.487865274Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 03:09:58.488052 containerd[1627]: time="2026-01-24T03:09:58.487935065Z" level=info msg="RemovePodSandbox \"4ba567bc3ab8ee65f80cf7b91230e1dd2b413a9dd2f9814a410c11cb9c7243bc\" returns successfully" Jan 24 03:10:03.125253 containerd[1627]: time="2026-01-24T03:10:03.124814425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 03:10:03.441889 containerd[1627]: time="2026-01-24T03:10:03.441721766Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:03.443537 containerd[1627]: time="2026-01-24T03:10:03.443474090Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 03:10:03.443700 containerd[1627]: time="2026-01-24T03:10:03.443646704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 03:10:03.444022 kubelet[2868]: E0124 03:10:03.443955 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 03:10:03.444798 kubelet[2868]: E0124 03:10:03.444050 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 03:10:03.444798 kubelet[2868]: E0124 03:10:03.444290 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5kqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569dd98ffb-zpcp9_calico-apiserver(76ab4499-021b-4baa-941b-8b5ea5143e46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:03.446238 kubelet[2868]: E0124 03:10:03.446086 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" podUID="76ab4499-021b-4baa-941b-8b5ea5143e46" Jan 24 03:10:04.136164 containerd[1627]: time="2026-01-24T03:10:04.135394687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 03:10:04.454125 containerd[1627]: time="2026-01-24T03:10:04.454032666Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:04.455744 containerd[1627]: time="2026-01-24T03:10:04.455678468Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 03:10:04.455865 containerd[1627]: time="2026-01-24T03:10:04.455797926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 03:10:04.456626 kubelet[2868]: E0124 03:10:04.456079 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 03:10:04.456626 kubelet[2868]: E0124 03:10:04.456153 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 03:10:04.456626 kubelet[2868]: E0124 03:10:04.456362 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfw7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7rk5p_calico-system(c3d4cc92-f20f-4793-8073-7a8fb294fc7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:04.459466 
containerd[1627]: time="2026-01-24T03:10:04.459434846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 03:10:04.782103 containerd[1627]: time="2026-01-24T03:10:04.781886009Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:04.783325 containerd[1627]: time="2026-01-24T03:10:04.783252866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 03:10:04.783588 containerd[1627]: time="2026-01-24T03:10:04.783373274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 03:10:04.783702 kubelet[2868]: E0124 03:10:04.783640 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 03:10:04.783773 kubelet[2868]: E0124 03:10:04.783714 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 03:10:04.784361 kubelet[2868]: E0124 03:10:04.784152 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfw7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7rk5p_calico-system(c3d4cc92-f20f-4793-8073-7a8fb294fc7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:04.785959 kubelet[2868]: E0124 03:10:04.785739 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:10:06.136654 containerd[1627]: time="2026-01-24T03:10:06.136233862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 03:10:06.452887 containerd[1627]: time="2026-01-24T03:10:06.452706295Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:06.456699 containerd[1627]: time="2026-01-24T03:10:06.456626770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 03:10:06.456808 containerd[1627]: time="2026-01-24T03:10:06.456759620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 03:10:06.457097 kubelet[2868]: E0124 03:10:06.456973 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 03:10:06.457097 kubelet[2868]: E0124 03:10:06.457091 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 03:10:06.458809 kubelet[2868]: E0124 03:10:06.457463 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5cht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64cd87fbdf-87r2w_calico-system(7b9a31a8-5cc7-4ee4-9145-620e764b84d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:06.459005 containerd[1627]: time="2026-01-24T03:10:06.457909032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 03:10:06.459559 kubelet[2868]: E0124 03:10:06.459189 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" podUID="7b9a31a8-5cc7-4ee4-9145-620e764b84d5" Jan 24 03:10:06.778697 containerd[1627]: time="2026-01-24T03:10:06.778454164Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:06.784554 containerd[1627]: time="2026-01-24T03:10:06.784346705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 03:10:06.784554 containerd[1627]: time="2026-01-24T03:10:06.784520282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 03:10:06.785062 kubelet[2868]: E0124 03:10:06.784797 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 03:10:06.785062 kubelet[2868]: E0124 03:10:06.784939 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 03:10:06.785908 
kubelet[2868]: E0124 03:10:06.785127 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxfv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569dd98ffb-4br8n_calico-apiserver(00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:06.786618 kubelet[2868]: E0124 03:10:06.786338 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" podUID="00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c" Jan 24 03:10:08.125341 containerd[1627]: time="2026-01-24T03:10:08.124522353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 03:10:08.457302 containerd[1627]: time="2026-01-24T03:10:08.456992432Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:08.458528 containerd[1627]: time="2026-01-24T03:10:08.458421536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 
03:10:08.458528 containerd[1627]: time="2026-01-24T03:10:08.458432558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 03:10:08.459055 kubelet[2868]: E0124 03:10:08.458930 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 03:10:08.459055 kubelet[2868]: E0124 03:10:08.459021 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 03:10:08.459821 kubelet[2868]: E0124 03:10:08.459225 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:59dbfedf34134529b60f39f05b808eb2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fndp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775d9ff4d9-p47mr_calico-system(d78211da-ca25-4f3e-be35-f78b1336c756): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:08.462855 containerd[1627]: time="2026-01-24T03:10:08.462811140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 03:10:08.790305 containerd[1627]: time="2026-01-24T03:10:08.790109965Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:08.792146 containerd[1627]: time="2026-01-24T03:10:08.792036530Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 03:10:08.792232 containerd[1627]: time="2026-01-24T03:10:08.792153969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 03:10:08.792565 kubelet[2868]: E0124 03:10:08.792506 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 03:10:08.792692 kubelet[2868]: E0124 03:10:08.792582 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 03:10:08.792900 kubelet[2868]: E0124 03:10:08.792769 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fndp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775d9ff4d9-p47mr_calico-system(d78211da-ca25-4f3e-be35-f78b1336c756): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:08.794518 kubelet[2868]: E0124 03:10:08.794437 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775d9ff4d9-p47mr" podUID="d78211da-ca25-4f3e-be35-f78b1336c756" Jan 24 03:10:10.129277 containerd[1627]: time="2026-01-24T03:10:10.127442038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 03:10:10.442505 containerd[1627]: time="2026-01-24T03:10:10.442001021Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:10.443517 containerd[1627]: time="2026-01-24T03:10:10.443380440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 03:10:10.443722 containerd[1627]: time="2026-01-24T03:10:10.443538540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 03:10:10.447008 kubelet[2868]: E0124 03:10:10.444044 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 03:10:10.447008 kubelet[2868]: E0124 03:10:10.444147 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 03:10:10.447008 kubelet[2868]: E0124 03:10:10.444397 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2544f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-54qqp_calico-system(b46b6c51-14b1-4c45-8faa-d27677477dc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:10.448159 kubelet[2868]: E0124 03:10:10.447829 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-54qqp" podUID="b46b6c51-14b1-4c45-8faa-d27677477dc3" Jan 24 03:10:14.572578 systemd[1]: Started 
sshd@9-10.244.26.234:22-20.161.92.111:51236.service - OpenSSH per-connection server daemon (20.161.92.111:51236). Jan 24 03:10:15.240693 sshd[5579]: Accepted publickey for core from 20.161.92.111 port 51236 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:10:15.244089 sshd[5579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:10:15.268269 systemd-logind[1597]: New session 12 of user core. Jan 24 03:10:15.276234 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 03:10:16.411482 sshd[5579]: pam_unix(sshd:session): session closed for user core Jan 24 03:10:16.428527 systemd[1]: sshd@9-10.244.26.234:22-20.161.92.111:51236.service: Deactivated successfully. Jan 24 03:10:16.431071 systemd-logind[1597]: Session 12 logged out. Waiting for processes to exit. Jan 24 03:10:16.437139 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 03:10:16.439360 systemd-logind[1597]: Removed session 12. Jan 24 03:10:17.125742 kubelet[2868]: E0124 03:10:17.125653 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" podUID="76ab4499-021b-4baa-941b-8b5ea5143e46" Jan 24 03:10:18.125954 kubelet[2868]: E0124 03:10:18.125887 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" podUID="00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c" Jan 24 03:10:19.127511 kubelet[2868]: E0124 03:10:19.127418 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775d9ff4d9-p47mr" podUID="d78211da-ca25-4f3e-be35-f78b1336c756" Jan 24 03:10:20.128590 kubelet[2868]: E0124 03:10:20.128533 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" podUID="7b9a31a8-5cc7-4ee4-9145-620e764b84d5" Jan 24 03:10:20.132756 kubelet[2868]: E0124 03:10:20.132402 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:10:21.511994 systemd[1]: Started sshd@10-10.244.26.234:22-20.161.92.111:51250.service - OpenSSH per-connection server daemon (20.161.92.111:51250). Jan 24 03:10:22.117289 sshd[5617]: Accepted publickey for core from 20.161.92.111 port 51250 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:10:22.120745 sshd[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:10:22.134528 systemd-logind[1597]: New session 13 of user core. Jan 24 03:10:22.142047 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 03:10:22.820016 sshd[5617]: pam_unix(sshd:session): session closed for user core Jan 24 03:10:22.828693 systemd[1]: sshd@10-10.244.26.234:22-20.161.92.111:51250.service: Deactivated successfully. Jan 24 03:10:22.837470 systemd-logind[1597]: Session 13 logged out. Waiting for processes to exit. Jan 24 03:10:22.840152 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 03:10:22.843062 systemd-logind[1597]: Removed session 13. Jan 24 03:10:25.124617 kubelet[2868]: E0124 03:10:25.124461 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-54qqp" podUID="b46b6c51-14b1-4c45-8faa-d27677477dc3" Jan 24 03:10:27.913970 systemd[1]: Started sshd@11-10.244.26.234:22-20.161.92.111:59698.service - OpenSSH per-connection server daemon (20.161.92.111:59698). 
Jan 24 03:10:28.533703 sshd[5632]: Accepted publickey for core from 20.161.92.111 port 59698 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:10:28.536391 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:10:28.543845 systemd-logind[1597]: New session 14 of user core. Jan 24 03:10:28.548985 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 03:10:29.078687 sshd[5632]: pam_unix(sshd:session): session closed for user core Jan 24 03:10:29.084544 systemd[1]: sshd@11-10.244.26.234:22-20.161.92.111:59698.service: Deactivated successfully. Jan 24 03:10:29.089689 systemd-logind[1597]: Session 14 logged out. Waiting for processes to exit. Jan 24 03:10:29.091162 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 03:10:29.092855 systemd-logind[1597]: Removed session 14. Jan 24 03:10:29.127625 containerd[1627]: time="2026-01-24T03:10:29.125247268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 03:10:29.452556 containerd[1627]: time="2026-01-24T03:10:29.452199895Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:29.453732 containerd[1627]: time="2026-01-24T03:10:29.453649892Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 03:10:29.454009 containerd[1627]: time="2026-01-24T03:10:29.453727196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 03:10:29.455663 kubelet[2868]: E0124 03:10:29.454172 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 03:10:29.455663 kubelet[2868]: E0124 03:10:29.454254 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 03:10:29.455663 kubelet[2868]: E0124 03:10:29.454813 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5kqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569dd98ffb-zpcp9_calico-apiserver(76ab4499-021b-4baa-941b-8b5ea5143e46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:29.456534 containerd[1627]: time="2026-01-24T03:10:29.455006030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 03:10:29.458176 kubelet[2868]: E0124 03:10:29.457682 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" podUID="76ab4499-021b-4baa-941b-8b5ea5143e46" Jan 24 03:10:29.781248 containerd[1627]: time="2026-01-24T03:10:29.781073646Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:29.783320 containerd[1627]: time="2026-01-24T03:10:29.783099921Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 03:10:29.783320 containerd[1627]: 
time="2026-01-24T03:10:29.783286176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 03:10:29.783617 kubelet[2868]: E0124 03:10:29.783534 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 03:10:29.783728 kubelet[2868]: E0124 03:10:29.783639 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 03:10:29.783905 kubelet[2868]: E0124 03:10:29.783824 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxfv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569dd98ffb-4br8n_calico-apiserver(00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:29.785324 kubelet[2868]: E0124 03:10:29.785240 2868 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" podUID="00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c" Jan 24 03:10:30.130179 containerd[1627]: time="2026-01-24T03:10:30.129932172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 03:10:30.454278 containerd[1627]: time="2026-01-24T03:10:30.454117920Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:30.455767 containerd[1627]: time="2026-01-24T03:10:30.455699873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 03:10:30.455873 containerd[1627]: time="2026-01-24T03:10:30.455803628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 03:10:30.456141 kubelet[2868]: E0124 03:10:30.456081 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 03:10:30.456733 kubelet[2868]: E0124 03:10:30.456161 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 03:10:30.456733 kubelet[2868]: E0124 03:10:30.456336 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:59dbfedf34134529b60f39f05b808eb2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fndp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775d9ff4d9-p47mr_calico-system(d78211da-ca25-4f3e-be35-f78b1336c756): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:30.459477 containerd[1627]: time="2026-01-24T03:10:30.459428911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 03:10:30.768507 containerd[1627]: time="2026-01-24T03:10:30.768236967Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:30.769740 containerd[1627]: time="2026-01-24T03:10:30.769675440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 03:10:30.769865 containerd[1627]: time="2026-01-24T03:10:30.769784898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 03:10:30.770257 kubelet[2868]: E0124 03:10:30.770045 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 03:10:30.770571 kubelet[2868]: E0124 03:10:30.770443 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 03:10:30.771288 kubelet[2868]: E0124 03:10:30.770882 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fndp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775d9ff4d9-p47mr_calico-system(d78211da-ca25-4f3e-be35-f78b1336c756): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:30.772350 kubelet[2868]: E0124 03:10:30.772298 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775d9ff4d9-p47mr" podUID="d78211da-ca25-4f3e-be35-f78b1336c756" Jan 24 03:10:31.126775 containerd[1627]: time="2026-01-24T03:10:31.126292548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 03:10:31.464299 containerd[1627]: time="2026-01-24T03:10:31.464029830Z" level=info msg="trying next 
host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:31.465752 containerd[1627]: time="2026-01-24T03:10:31.465704648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 03:10:31.465980 containerd[1627]: time="2026-01-24T03:10:31.465760875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 03:10:31.466347 kubelet[2868]: E0124 03:10:31.466274 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 03:10:31.466867 kubelet[2868]: E0124 03:10:31.466354 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 03:10:31.466867 kubelet[2868]: E0124 03:10:31.466718 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfw7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7rk5p_calico-system(c3d4cc92-f20f-4793-8073-7a8fb294fc7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:31.467909 containerd[1627]: time="2026-01-24T03:10:31.467713541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 03:10:31.777539 containerd[1627]: time="2026-01-24T03:10:31.777170864Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:31.778725 containerd[1627]: time="2026-01-24T03:10:31.778551804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 03:10:31.778725 containerd[1627]: time="2026-01-24T03:10:31.778654361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 03:10:31.779175 kubelet[2868]: E0124 03:10:31.778895 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 03:10:31.779175 kubelet[2868]: E0124 03:10:31.778988 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 03:10:31.780184 kubelet[2868]: E0124 03:10:31.779338 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5cht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64cd87fbdf-87r2w_calico-system(7b9a31a8-5cc7-4ee4-9145-620e764b84d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:31.780785 kubelet[2868]: E0124 03:10:31.780736 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" podUID="7b9a31a8-5cc7-4ee4-9145-620e764b84d5" Jan 24 03:10:31.781303 containerd[1627]: time="2026-01-24T03:10:31.781271666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 03:10:32.107366 containerd[1627]: time="2026-01-24T03:10:32.107126812Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:32.108528 containerd[1627]: time="2026-01-24T03:10:32.108441222Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 03:10:32.108670 containerd[1627]: time="2026-01-24T03:10:32.108484135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 03:10:32.108890 kubelet[2868]: E0124 03:10:32.108835 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 03:10:32.109002 kubelet[2868]: 
E0124 03:10:32.108907 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 03:10:32.109136 kubelet[2868]: E0124 03:10:32.109075 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfw7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7rk5p_calico-system(c3d4cc92-f20f-4793-8073-7a8fb294fc7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:32.110860 kubelet[2868]: E0124 03:10:32.110784 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:10:34.175944 systemd[1]: Started sshd@12-10.244.26.234:22-20.161.92.111:48168.service - OpenSSH per-connection server daemon (20.161.92.111:48168). Jan 24 03:10:34.748896 sshd[5655]: Accepted publickey for core from 20.161.92.111 port 48168 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:10:34.751406 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:10:34.760313 systemd-logind[1597]: New session 15 of user core. Jan 24 03:10:34.766117 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 03:10:35.252614 sshd[5655]: pam_unix(sshd:session): session closed for user core Jan 24 03:10:35.258490 systemd[1]: sshd@12-10.244.26.234:22-20.161.92.111:48168.service: Deactivated successfully. Jan 24 03:10:35.264992 systemd-logind[1597]: Session 15 logged out. Waiting for processes to exit. Jan 24 03:10:35.266495 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 03:10:35.269052 systemd-logind[1597]: Removed session 15. Jan 24 03:10:35.353118 systemd[1]: Started sshd@13-10.244.26.234:22-20.161.92.111:48172.service - OpenSSH per-connection server daemon (20.161.92.111:48172). Jan 24 03:10:35.917418 sshd[5671]: Accepted publickey for core from 20.161.92.111 port 48172 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:10:35.919659 sshd[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:10:35.926171 systemd-logind[1597]: New session 16 of user core. Jan 24 03:10:35.931130 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 03:10:36.720091 sshd[5671]: pam_unix(sshd:session): session closed for user core Jan 24 03:10:36.727537 systemd[1]: sshd@13-10.244.26.234:22-20.161.92.111:48172.service: Deactivated successfully. Jan 24 03:10:36.728164 systemd-logind[1597]: Session 16 logged out. Waiting for processes to exit. Jan 24 03:10:36.733416 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 03:10:36.735183 systemd-logind[1597]: Removed session 16. Jan 24 03:10:36.817952 systemd[1]: Started sshd@14-10.244.26.234:22-20.161.92.111:48188.service - OpenSSH per-connection server daemon (20.161.92.111:48188). Jan 24 03:10:37.425456 sshd[5683]: Accepted publickey for core from 20.161.92.111 port 48188 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:10:37.428386 sshd[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:10:37.437421 systemd-logind[1597]: New session 17 of user core. Jan 24 03:10:37.443050 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 03:10:37.956062 sshd[5683]: pam_unix(sshd:session): session closed for user core Jan 24 03:10:37.960805 systemd[1]: sshd@14-10.244.26.234:22-20.161.92.111:48188.service: Deactivated successfully. Jan 24 03:10:37.966016 systemd-logind[1597]: Session 17 logged out. Waiting for processes to exit. Jan 24 03:10:37.966957 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 03:10:37.969366 systemd-logind[1597]: Removed session 17. 
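The one-line &Container{...} values in the kubelet errors above are Go struct dumps of corev1.Container. Re-indented, and abridged to the fields the log actually shows with non-zero values, the calico-apiserver spec reads roughly as follows — a sketch against k8s.io/api/core/v1; the ptrTo helper is local, not a Kubernetes API:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    // ptrTo is a local helper; the kubelet dump prints pointer fields as *value.
    func ptrTo[T any](v T) *T { return &v }

    func main() {
    	// Re-indented form of the calico-apiserver dump from 03:10:29 above.
    	c := corev1.Container{
    		Name:  "calico-apiserver",
    		Image: "ghcr.io/flatcar/calico/apiserver:v3.30.4",
    		Args: []string{
    			"--secure-port=5443",
    			"--tls-private-key-file=/calico-apiserver-certs/tls.key",
    			"--tls-cert-file=/calico-apiserver-certs/tls.crt",
    		},
    		Env: []corev1.EnvVar{
    			{Name: "DATASTORE_TYPE", Value: "kubernetes"},
    			{Name: "KUBERNETES_SERVICE_HOST", Value: "10.96.0.1"},
    			{Name: "KUBERNETES_SERVICE_PORT", Value: "443"},
    			{Name: "LOG_LEVEL", Value: "info"},
    			{Name: "MULTI_INTERFACE_MODE", Value: "none"},
    		},
    		ReadinessProbe: &corev1.Probe{
    			ProbeHandler: corev1.ProbeHandler{
    				HTTPGet: &corev1.HTTPGetAction{
    					Path:   "/readyz",
    					Port:   intstr.FromInt(5443), // printed as Port:{0 5443 } in the dump
    					Scheme: corev1.URISchemeHTTPS,
    				},
    			},
    			TimeoutSeconds:   5,
    			PeriodSeconds:    60,
    			SuccessThreshold: 1,
    			FailureThreshold: 3,
    		},
    		ImagePullPolicy: corev1.PullIfNotPresent,
    		SecurityContext: &corev1.SecurityContext{
    			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
    			Privileged:               ptrTo(false),
    			RunAsUser:                ptrTo(int64(10001)),
    			RunAsGroup:               ptrTo(int64(10001)),
    			RunAsNonRoot:             ptrTo(true),
    			AllowPrivilegeEscalation: ptrTo(false),
    			SeccompProfile: &corev1.SeccompProfile{
    				Type: corev1.SeccompProfileTypeRuntimeDefault,
    			},
    		},
    	}
    	fmt.Printf("%+v\n", c)
    }

The other dumps (whisker, whisker-backend, calico-csi, csi-node-driver-registrar, calico-kube-controllers, goldmane) have the same shape and differ only in image, args, env, mounts, and probes.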
Jan 24 03:10:38.125349 containerd[1627]: time="2026-01-24T03:10:38.125284123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 03:10:38.444263 containerd[1627]: time="2026-01-24T03:10:38.444145954Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 03:10:38.445910 containerd[1627]: time="2026-01-24T03:10:38.445862190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 03:10:38.446061 containerd[1627]: time="2026-01-24T03:10:38.445929071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 03:10:38.446789 kubelet[2868]: E0124 03:10:38.446330 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 03:10:38.446789 kubelet[2868]: E0124 03:10:38.446422 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 03:10:38.446789 kubelet[2868]: E0124 03:10:38.446693 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2544f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-54qqp_calico-system(b46b6c51-14b1-4c45-8faa-d27677477dc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 03:10:38.449231 kubelet[2868]: E0124 03:10:38.448232 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-54qqp" podUID="b46b6c51-14b1-4c45-8faa-d27677477dc3" Jan 24 03:10:41.125334 kubelet[2868]: E0124 03:10:41.125266 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" podUID="00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c" Jan 24 03:10:41.126218 kubelet[2868]: E0124 03:10:41.125383 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" podUID="76ab4499-021b-4baa-941b-8b5ea5143e46" Jan 24 03:10:43.056267 systemd[1]: Started sshd@15-10.244.26.234:22-20.161.92.111:33580.service - OpenSSH per-connection server daemon (20.161.92.111:33580). 
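Every failed pull above follows the same path: containerd's resolver gets an HTTP 404 from ghcr.io ("trying next host - response was http.StatusNotFound"), surfaces it as a NotFound error over CRI, and the kubelet records ErrImagePull. A minimal sketch of running the same resolution check directly against the node's containerd socket — assuming the containerd 1.x Go client; CRI-managed images live in the "k8s.io" namespace:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/errdefs"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Talk to the same containerd the kubelet uses on the node.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// One of the references the kubelet kept asking for above.
    	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"
    	_, err = client.Pull(ctx, ref)
    	switch {
    	case errdefs.IsNotFound(err):
    		// Corresponds to the "failed to resolve reference ... not found"
    		// lines: the registry answered 404 for this tag.
    		fmt.Printf("%s: not found upstream\n", ref)
    	case err != nil:
    		log.Fatal(err)
    	default:
    		fmt.Printf("%s: pulled\n", ref)
    	}
    }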
Jan 24 03:10:43.124815 kubelet[2868]: E0124 03:10:43.124745 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" podUID="7b9a31a8-5cc7-4ee4-9145-620e764b84d5" Jan 24 03:10:43.655586 sshd[5703]: Accepted publickey for core from 20.161.92.111 port 33580 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:10:43.658856 sshd[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:10:43.667023 systemd-logind[1597]: New session 18 of user core. Jan 24 03:10:43.673170 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 03:10:44.127672 kubelet[2868]: E0124 03:10:44.127314 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775d9ff4d9-p47mr" podUID="d78211da-ca25-4f3e-be35-f78b1336c756" Jan 24 03:10:44.162278 sshd[5703]: pam_unix(sshd:session): session closed for user core Jan 24 03:10:44.169002 systemd-logind[1597]: Session 18 logged out. Waiting for processes to exit. Jan 24 03:10:44.170219 systemd[1]: sshd@15-10.244.26.234:22-20.161.92.111:33580.service: Deactivated successfully. Jan 24 03:10:44.177781 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 03:10:44.180282 systemd-logind[1597]: Removed session 18. 
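From 03:10:41 onward the errors switch from ErrImagePull to ImagePullBackOff: the kubelet stops retrying immediately and waits out an exponential backoff between pull attempts, which is why the later entries recur at growing intervals rather than every sync. A sketch of the commonly cited defaults — 10s initial delay, doubling per failure, capped at 5 minutes; treat the exact constants as an assumption, since they are kubelet-internal:

    package main

    import (
    	"fmt"
    	"time"
    )

    // pullDelay models the wait before pull attempt n, assuming a 10s initial
    // backoff that doubles per failure and is capped at 5 minutes.
    func pullDelay(n int) time.Duration {
    	const (
    		initial  = 10 * time.Second
    		maxDelay = 5 * time.Minute
    	)
    	d := initial
    	for i := 0; i < n; i++ {
    		d *= 2
    		if d >= maxDelay {
    			return maxDelay
    		}
    	}
    	return d
    }

    func main() {
    	// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s.
    	for n := 0; n < 6; n++ {
    		fmt.Printf("attempt %d: wait %v\n", n, pullDelay(n))
    	}
    }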
Jan 24 03:10:45.126031 kubelet[2868]: E0124 03:10:45.125434 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:10:49.266020 systemd[1]: Started sshd@16-10.244.26.234:22-20.161.92.111:33586.service - OpenSSH per-connection server daemon (20.161.92.111:33586). Jan 24 03:10:49.854182 sshd[5739]: Accepted publickey for core from 20.161.92.111 port 33586 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:10:49.857375 sshd[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:10:49.864839 systemd-logind[1597]: New session 19 of user core. Jan 24 03:10:49.872040 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 03:10:50.128313 kubelet[2868]: E0124 03:10:50.127320 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-54qqp" podUID="b46b6c51-14b1-4c45-8faa-d27677477dc3" Jan 24 03:10:50.417476 sshd[5739]: pam_unix(sshd:session): session closed for user core Jan 24 03:10:50.428771 systemd[1]: sshd@16-10.244.26.234:22-20.161.92.111:33586.service: Deactivated successfully. Jan 24 03:10:50.435001 systemd-logind[1597]: Session 19 logged out. Waiting for processes to exit. Jan 24 03:10:50.435902 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 03:10:50.439114 systemd-logind[1597]: Removed session 19. 
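Pods with more than one failing container (whisker/whisker-backend, calico-csi/csi-node-driver-registrar) report a single bracketed err="[failed to ..., failed to ...]" line. That bracketed form is how a Kubernetes error aggregate renders; a minimal sketch using k8s.io/apimachinery, with the messages shortened:

    package main

    import (
    	"errors"
    	"fmt"

    	utilerrors "k8s.io/apimachinery/pkg/util/errors"
    )

    func main() {
    	// Two containers of the same pod failing independently, as with
    	// whisker and whisker-backend in the entries above.
    	errs := []error{
    		errors.New(`failed to "StartContainer" for "whisker" with ImagePullBackOff`),
    		errors.New(`failed to "StartContainer" for "whisker-backend" with ImagePullBackOff`),
    	}
    	// Prints a single bracketed list, matching the err="[..., ...]" form.
    	fmt.Println(utilerrors.NewAggregate(errs))
    }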
Jan 24 03:10:53.126679 kubelet[2868]: E0124 03:10:53.125362 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" podUID="00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c" Jan 24 03:10:54.126673 kubelet[2868]: E0124 03:10:54.126586 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" podUID="76ab4499-021b-4baa-941b-8b5ea5143e46" Jan 24 03:10:55.516072 systemd[1]: Started sshd@17-10.244.26.234:22-20.161.92.111:36136.service - OpenSSH per-connection server daemon (20.161.92.111:36136). Jan 24 03:10:56.082296 sshd[5753]: Accepted publickey for core from 20.161.92.111 port 36136 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:10:56.084745 sshd[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:10:56.092619 systemd-logind[1597]: New session 20 of user core. Jan 24 03:10:56.098129 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 03:10:56.127166 kubelet[2868]: E0124 03:10:56.126820 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f" Jan 24 03:10:56.579185 sshd[5753]: pam_unix(sshd:session): session closed for user core Jan 24 03:10:56.586433 systemd-logind[1597]: Session 20 logged out. Waiting for processes to exit. Jan 24 03:10:56.587194 systemd[1]: sshd@17-10.244.26.234:22-20.161.92.111:36136.service: Deactivated successfully. Jan 24 03:10:56.592456 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 03:10:56.594489 systemd-logind[1597]: Removed session 20. Jan 24 03:10:56.680129 systemd[1]: Started sshd@18-10.244.26.234:22-20.161.92.111:36140.service - OpenSSH per-connection server daemon (20.161.92.111:36140). 
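The kubelet entries carry a klog-style header before the structured message: a severity letter, MMDD date, wall-clock time, PID, and source file:line, e.g. E0124 03:10:53.125362 2868 pod_workers.go:1301]. A small parser for that prefix — the regular expression is illustrative, klog itself does not export one:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogHeader matches headers like
    //   E0124 03:10:53.125362 2868 pod_workers.go:1301] ...
    // capturing severity, MMDD, time, PID, source location, and the message.
    var klogHeader = regexp.MustCompile(
    	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

    func main() {
    	line := `E0124 03:10:53.125362 2868 pod_workers.go:1301] "Error syncing pod, skipping"`
    	m := klogHeader.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%s\n",
    		m[1], m[2], m[3], m[4], m[5], m[6])
    }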
Jan 24 03:10:57.128464 kubelet[2868]: E0124 03:10:57.128342 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775d9ff4d9-p47mr" podUID="d78211da-ca25-4f3e-be35-f78b1336c756" Jan 24 03:10:57.246700 sshd[5769]: Accepted publickey for core from 20.161.92.111 port 36140 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:10:57.249196 sshd[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:10:57.256569 systemd-logind[1597]: New session 21 of user core. Jan 24 03:10:57.264320 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 24 03:10:58.127582 kubelet[2868]: E0124 03:10:58.127010 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" podUID="7b9a31a8-5cc7-4ee4-9145-620e764b84d5" Jan 24 03:10:58.134332 sshd[5769]: pam_unix(sshd:session): session closed for user core Jan 24 03:10:58.147580 systemd[1]: sshd@18-10.244.26.234:22-20.161.92.111:36140.service: Deactivated successfully. Jan 24 03:10:58.162048 systemd-logind[1597]: Session 21 logged out. Waiting for processes to exit. Jan 24 03:10:58.162707 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 03:10:58.165699 systemd-logind[1597]: Removed session 21. Jan 24 03:10:58.230007 systemd[1]: Started sshd@19-10.244.26.234:22-20.161.92.111:36156.service - OpenSSH per-connection server daemon (20.161.92.111:36156). Jan 24 03:10:58.831307 sshd[5782]: Accepted publickey for core from 20.161.92.111 port 36156 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:10:58.834685 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:10:58.844847 systemd-logind[1597]: New session 22 of user core. Jan 24 03:10:58.851114 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 24 03:11:00.183824 sshd[5782]: pam_unix(sshd:session): session closed for user core Jan 24 03:11:00.194892 systemd-logind[1597]: Session 22 logged out. Waiting for processes to exit. Jan 24 03:11:00.195388 systemd[1]: sshd@19-10.244.26.234:22-20.161.92.111:36156.service: Deactivated successfully. 
Jan 24 03:11:00.202179 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 03:11:00.204668 systemd-logind[1597]: Removed session 22. Jan 24 03:11:00.271829 systemd[1]: Started sshd@20-10.244.26.234:22-20.161.92.111:36158.service - OpenSSH per-connection server daemon (20.161.92.111:36158). Jan 24 03:11:00.872647 sshd[5801]: Accepted publickey for core from 20.161.92.111 port 36158 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:11:00.875186 sshd[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:11:00.883689 systemd-logind[1597]: New session 23 of user core. Jan 24 03:11:00.893409 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 03:11:01.734027 sshd[5801]: pam_unix(sshd:session): session closed for user core Jan 24 03:11:01.738948 systemd-logind[1597]: Session 23 logged out. Waiting for processes to exit. Jan 24 03:11:01.739799 systemd[1]: sshd@20-10.244.26.234:22-20.161.92.111:36158.service: Deactivated successfully. Jan 24 03:11:01.746347 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 03:11:01.748170 systemd-logind[1597]: Removed session 23. Jan 24 03:11:01.855076 systemd[1]: Started sshd@21-10.244.26.234:22-20.161.92.111:36162.service - OpenSSH per-connection server daemon (20.161.92.111:36162). Jan 24 03:11:02.435335 sshd[5813]: Accepted publickey for core from 20.161.92.111 port 36162 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:11:02.437801 sshd[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:11:02.444743 systemd-logind[1597]: New session 24 of user core. Jan 24 03:11:02.451168 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 24 03:11:02.972336 sshd[5813]: pam_unix(sshd:session): session closed for user core Jan 24 03:11:02.979673 systemd[1]: sshd@21-10.244.26.234:22-20.161.92.111:36162.service: Deactivated successfully. Jan 24 03:11:02.983484 systemd-logind[1597]: Session 24 logged out. Waiting for processes to exit. Jan 24 03:11:02.984194 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 03:11:02.987399 systemd-logind[1597]: Removed session 24. 
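Each SSH connection in this log is socket-activated as its own templated unit, with the instance name encoding a per-listener counter plus the local and remote endpoints, e.g. sshd@21-10.244.26.234:22-20.161.92.111:36162.service. A hypothetical helper for splitting such a name back apart — the parsing convention is inferred from the names in this log, not a systemd API:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // sshdUnit splits names like
    //   sshd@21-10.244.26.234:22-20.161.92.111:36162.service
    // into instance counter, local address:port, and remote address:port.
    var sshdUnit = regexp.MustCompile(
    	`^sshd@(\d+)-([\d.]+:\d+)-([\d.]+:\d+)\.service$`)

    func main() {
    	unit := "sshd@21-10.244.26.234:22-20.161.92.111:36162.service"
    	if m := sshdUnit.FindStringSubmatch(unit); m != nil {
    		fmt.Printf("instance=%s local=%s remote=%s\n", m[1], m[2], m[3])
    	}
    }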
Jan 24 03:11:04.124970 kubelet[2868]: E0124 03:11:04.124691 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" podUID="00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c" Jan 24 03:11:05.125731 kubelet[2868]: E0124 03:11:05.125422 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" podUID="76ab4499-021b-4baa-941b-8b5ea5143e46" Jan 24 03:11:05.125731 kubelet[2868]: E0124 03:11:05.125434 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-54qqp" podUID="b46b6c51-14b1-4c45-8faa-d27677477dc3" Jan 24 03:11:08.069065 systemd[1]: Started sshd@22-10.244.26.234:22-20.161.92.111:42136.service - OpenSSH per-connection server daemon (20.161.92.111:42136). Jan 24 03:11:08.683655 sshd[5829]: Accepted publickey for core from 20.161.92.111 port 42136 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:11:08.686270 sshd[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:11:08.696859 systemd-logind[1597]: New session 25 of user core. Jan 24 03:11:08.706038 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 24 03:11:09.195104 sshd[5829]: pam_unix(sshd:session): session closed for user core Jan 24 03:11:09.200070 systemd[1]: sshd@22-10.244.26.234:22-20.161.92.111:42136.service: Deactivated successfully. Jan 24 03:11:09.205572 systemd[1]: session-25.scope: Deactivated successfully. Jan 24 03:11:09.205658 systemd-logind[1597]: Session 25 logged out. Waiting for processes to exit. Jan 24 03:11:09.208537 systemd-logind[1597]: Removed session 25. 
Jan 24 03:11:10.128204 kubelet[2868]: E0124 03:11:10.126845 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f"
Jan 24 03:11:10.128204 kubelet[2868]: E0124 03:11:10.127204 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" podUID="7b9a31a8-5cc7-4ee4-9145-620e764b84d5"
Jan 24 03:11:12.127186 containerd[1627]: time="2026-01-24T03:11:12.125356119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 24 03:11:12.448376 containerd[1627]: time="2026-01-24T03:11:12.448264571Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 03:11:12.450915 containerd[1627]: time="2026-01-24T03:11:12.450829168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 24 03:11:12.451079 containerd[1627]: time="2026-01-24T03:11:12.450872783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 24 03:11:12.451789 kubelet[2868]: E0124 03:11:12.451310 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 03:11:12.451789 kubelet[2868]: E0124 03:11:12.451495 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 03:11:12.453946 kubelet[2868]: E0124 03:11:12.453840 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:59dbfedf34134529b60f39f05b808eb2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fndp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775d9ff4d9-p47mr_calico-system(d78211da-ca25-4f3e-be35-f78b1336c756): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 24 03:11:12.456644 containerd[1627]: time="2026-01-24T03:11:12.456474409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 24 03:11:12.779405 containerd[1627]: time="2026-01-24T03:11:12.779209783Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 03:11:12.780732 containerd[1627]: time="2026-01-24T03:11:12.780678759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 24 03:11:12.780898 containerd[1627]: time="2026-01-24T03:11:12.780712885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 24 03:11:12.781070 kubelet[2868]: E0124 03:11:12.780999 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 03:11:12.781165 kubelet[2868]: E0124 03:11:12.781089 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 03:11:12.781308 kubelet[2868]: E0124 03:11:12.781248 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fndp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775d9ff4d9-p47mr_calico-system(d78211da-ca25-4f3e-be35-f78b1336c756): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 24 03:11:12.783069 kubelet[2868]: E0124 03:11:12.782922 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775d9ff4d9-p47mr" podUID="d78211da-ca25-4f3e-be35-f78b1336c756"
Jan 24 03:11:14.293994 systemd[1]: Started sshd@23-10.244.26.234:22-20.161.92.111:51790.service - OpenSSH per-connection server daemon (20.161.92.111:51790).
Jan 24 03:11:14.877591 sshd[5851]: Accepted publickey for core from 20.161.92.111 port 51790 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 03:11:14.880318 sshd[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 03:11:14.887691 systemd-logind[1597]: New session 26 of user core.
Jan 24 03:11:14.893099 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 24 03:11:15.383526 sshd[5851]: pam_unix(sshd:session): session closed for user core
Jan 24 03:11:15.387975 systemd[1]: sshd@23-10.244.26.234:22-20.161.92.111:51790.service: Deactivated successfully.
Jan 24 03:11:15.393962 systemd-logind[1597]: Session 26 logged out. Waiting for processes to exit.
Jan 24 03:11:15.395331 systemd[1]: session-26.scope: Deactivated successfully.
Jan 24 03:11:15.397768 systemd-logind[1597]: Removed session 26.
Jan 24 03:11:16.129968 containerd[1627]: time="2026-01-24T03:11:16.129525361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 03:11:16.445593 containerd[1627]: time="2026-01-24T03:11:16.445242191Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 03:11:16.446882 containerd[1627]: time="2026-01-24T03:11:16.446690749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 03:11:16.446882 containerd[1627]: time="2026-01-24T03:11:16.446715204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 03:11:16.448242 kubelet[2868]: E0124 03:11:16.447234 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 03:11:16.448242 kubelet[2868]: E0124 03:11:16.447355 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 03:11:16.448242 kubelet[2868]: E0124 03:11:16.447587 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxfv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569dd98ffb-4br8n_calico-apiserver(00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 03:11:16.449444 kubelet[2868]: E0124 03:11:16.449333 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-4br8n" podUID="00c68d5e-73a9-45ef-9b1c-7cb0bd0c3c8c"
Jan 24 03:11:18.126134 containerd[1627]: time="2026-01-24T03:11:18.126074693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 03:11:18.127194 kubelet[2868]: E0124 03:11:18.127093 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-54qqp" podUID="b46b6c51-14b1-4c45-8faa-d27677477dc3"
Jan 24 03:11:18.446578 containerd[1627]: time="2026-01-24T03:11:18.445813920Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 03:11:18.448448 containerd[1627]: time="2026-01-24T03:11:18.448291054Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 03:11:18.448448 containerd[1627]: time="2026-01-24T03:11:18.448320848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 03:11:18.450863 kubelet[2868]: E0124 03:11:18.448964 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 03:11:18.450863 kubelet[2868]: E0124 03:11:18.450718 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 03:11:18.451371 kubelet[2868]: E0124 03:11:18.451234 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5kqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-569dd98ffb-zpcp9_calico-apiserver(76ab4499-021b-4baa-941b-8b5ea5143e46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 03:11:18.452519 kubelet[2868]: E0124 03:11:18.452463 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-569dd98ffb-zpcp9" podUID="76ab4499-021b-4baa-941b-8b5ea5143e46"
Jan 24 03:11:20.487996 systemd[1]: Started sshd@24-10.244.26.234:22-20.161.92.111:51804.service - OpenSSH per-connection server daemon (20.161.92.111:51804).
Jan 24 03:11:21.092219 sshd[5887]: Accepted publickey for core from 20.161.92.111 port 51804 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 03:11:21.095290 sshd[5887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 03:11:21.106934 systemd-logind[1597]: New session 27 of user core.
Jan 24 03:11:21.116791 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 24 03:11:21.799934 sshd[5887]: pam_unix(sshd:session): session closed for user core
Jan 24 03:11:21.808757 systemd-logind[1597]: Session 27 logged out. Waiting for processes to exit.
Jan 24 03:11:21.811070 systemd[1]: sshd@24-10.244.26.234:22-20.161.92.111:51804.service: Deactivated successfully.
Jan 24 03:11:21.819291 systemd[1]: session-27.scope: Deactivated successfully.
Jan 24 03:11:21.827332 systemd-logind[1597]: Removed session 27.
Jan 24 03:11:23.125762 containerd[1627]: time="2026-01-24T03:11:23.125391445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 24 03:11:23.449563 containerd[1627]: time="2026-01-24T03:11:23.449145087Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 03:11:23.453731 containerd[1627]: time="2026-01-24T03:11:23.453392413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 24 03:11:23.453731 containerd[1627]: time="2026-01-24T03:11:23.453657416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 24 03:11:23.456736 kubelet[2868]: E0124 03:11:23.456210 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 03:11:23.456736 kubelet[2868]: E0124 03:11:23.456541 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 03:11:23.458707 kubelet[2868]: E0124 03:11:23.457909 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5cht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64cd87fbdf-87r2w_calico-system(7b9a31a8-5cc7-4ee4-9145-620e764b84d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 24 03:11:23.458962 containerd[1627]: time="2026-01-24T03:11:23.458169593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 24 03:11:23.459476 kubelet[2868]: E0124 03:11:23.459169 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64cd87fbdf-87r2w" podUID="7b9a31a8-5cc7-4ee4-9145-620e764b84d5"
Jan 24 03:11:23.800262 containerd[1627]: time="2026-01-24T03:11:23.800027341Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 03:11:23.801988 containerd[1627]: time="2026-01-24T03:11:23.801376237Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 24 03:11:23.801988 containerd[1627]: time="2026-01-24T03:11:23.801469610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 24 03:11:23.802546 kubelet[2868]: E0124 03:11:23.802272 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 03:11:23.802546 kubelet[2868]: E0124 03:11:23.802365 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 03:11:23.802849 kubelet[2868]: E0124 03:11:23.802571 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfw7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7rk5p_calico-system(c3d4cc92-f20f-4793-8073-7a8fb294fc7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 24 03:11:23.805794 containerd[1627]: time="2026-01-24T03:11:23.805752233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 24 03:11:24.119820 containerd[1627]: time="2026-01-24T03:11:24.118970380Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 03:11:24.121397 containerd[1627]: time="2026-01-24T03:11:24.121247784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 24 03:11:24.121397 containerd[1627]: time="2026-01-24T03:11:24.121311762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 24 03:11:24.128993 kubelet[2868]: E0124 03:11:24.128815 2868 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 03:11:24.128993 kubelet[2868]: E0124 03:11:24.128945 2868 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 03:11:24.130684 kubelet[2868]: E0124 03:11:24.130554 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775d9ff4d9-p47mr" podUID="d78211da-ca25-4f3e-be35-f78b1336c756"
Jan 24 03:11:24.151638 kubelet[2868]: E0124 03:11:24.149851 2868 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfw7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7rk5p_calico-system(c3d4cc92-f20f-4793-8073-7a8fb294fc7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 24 03:11:24.152220 kubelet[2868]: E0124 03:11:24.152121 2868 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7rk5p" podUID="c3d4cc92-f20f-4793-8073-7a8fb294fc7f"