Jan 17 01:17:47.009013 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 01:17:47.009048 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 01:17:47.009062 kernel: BIOS-provided physical RAM map: Jan 17 01:17:47.009078 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 17 01:17:47.009088 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 17 01:17:47.009098 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 17 01:17:47.009110 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 17 01:17:47.009120 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 17 01:17:47.009130 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 17 01:17:47.009141 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 17 01:17:47.009151 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 17 01:17:47.009161 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 17 01:17:47.009177 kernel: NX (Execute Disable) protection: active Jan 17 01:17:47.009188 kernel: APIC: Static calls initialized Jan 17 01:17:47.009200 kernel: SMBIOS 2.8 present. Jan 17 01:17:47.009212 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jan 17 01:17:47.009224 kernel: Hypervisor detected: KVM Jan 17 01:17:47.009240 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 01:17:47.009251 kernel: kvm-clock: using sched offset of 4345336967 cycles Jan 17 01:17:47.009287 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 01:17:47.009303 kernel: tsc: Detected 2499.998 MHz processor Jan 17 01:17:47.009314 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 01:17:47.009326 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 01:17:47.009338 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 17 01:17:47.009350 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 17 01:17:47.009361 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 01:17:47.009379 kernel: Using GB pages for direct mapping Jan 17 01:17:47.009391 kernel: ACPI: Early table checksum verification disabled Jan 17 01:17:47.009403 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jan 17 01:17:47.009414 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 01:17:47.009426 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 01:17:47.009437 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 01:17:47.009449 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 17 01:17:47.009460 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 01:17:47.009472 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Jan 17 01:17:47.009488 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 01:17:47.009500 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 01:17:47.009511 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 17 01:17:47.009523 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 17 01:17:47.009535 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 17 01:17:47.009552 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 17 01:17:47.009565 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 17 01:17:47.009581 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 17 01:17:47.009593 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 17 01:17:47.009605 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 01:17:47.009617 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 01:17:47.009629 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 17 01:17:47.009641 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Jan 17 01:17:47.009665 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 17 01:17:47.009683 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Jan 17 01:17:47.009696 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 17 01:17:47.009707 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Jan 17 01:17:47.009719 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 17 01:17:47.009731 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Jan 17 01:17:47.009743 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 17 01:17:47.009755 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Jan 17 01:17:47.009767 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 17 01:17:47.009778 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Jan 17 01:17:47.009790 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 17 01:17:47.009807 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Jan 17 01:17:47.009819 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 01:17:47.009831 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 17 01:17:47.009843 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 17 01:17:47.009855 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Jan 17 01:17:47.009868 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Jan 17 01:17:47.009880 kernel: Zone ranges: Jan 17 01:17:47.009892 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 01:17:47.009904 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 17 01:17:47.009921 kernel: Normal empty Jan 17 01:17:47.009933 kernel: Movable zone start for each node Jan 17 01:17:47.009945 kernel: Early memory node ranges Jan 17 01:17:47.009957 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 17 01:17:47.009969 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 17 01:17:47.009981 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 17 01:17:47.009993 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 01:17:47.010005 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 17 01:17:47.010017 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 17 01:17:47.010029 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 01:17:47.010046 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 01:17:47.010058 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Jan 17 01:17:47.010070 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 01:17:47.010082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 01:17:47.010094 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 01:17:47.010106 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 01:17:47.010118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 01:17:47.010130 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 01:17:47.010142 kernel: TSC deadline timer available Jan 17 01:17:47.010159 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Jan 17 01:17:47.010171 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 01:17:47.010183 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 17 01:17:47.010195 kernel: Booting paravirtualized kernel on KVM Jan 17 01:17:47.010208 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 01:17:47.010220 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 17 01:17:47.010232 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144 Jan 17 01:17:47.010244 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152 Jan 17 01:17:47.010256 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 17 01:17:47.012308 kernel: kvm-guest: PV spinlocks enabled Jan 17 01:17:47.012342 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 01:17:47.012358 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 01:17:47.012372 kernel: random: crng init done Jan 17 01:17:47.012384 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 01:17:47.012396 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 01:17:47.012409 kernel: Fallback order for Node 0: 0 Jan 17 01:17:47.012421 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Jan 17 01:17:47.012442 kernel: Policy zone: DMA32 Jan 17 01:17:47.012454 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 01:17:47.012466 kernel: software IO TLB: area num 16. Jan 17 01:17:47.012479 kernel: Memory: 1901588K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 194768K reserved, 0K cma-reserved) Jan 17 01:17:47.012491 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 17 01:17:47.012503 kernel: Kernel/User page tables isolation: enabled Jan 17 01:17:47.012515 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 01:17:47.012528 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 01:17:47.012539 kernel: Dynamic Preempt: voluntary Jan 17 01:17:47.012557 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 01:17:47.012570 kernel: rcu: RCU event tracing is enabled. Jan 17 01:17:47.012583 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 17 01:17:47.012595 kernel: Trampoline variant of Tasks RCU enabled. 
Jan 17 01:17:47.012608 kernel: Rude variant of Tasks RCU enabled. Jan 17 01:17:47.012634 kernel: Tracing variant of Tasks RCU enabled. Jan 17 01:17:47.012647 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 01:17:47.012673 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 17 01:17:47.012686 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 17 01:17:47.012699 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 01:17:47.012711 kernel: Console: colour VGA+ 80x25 Jan 17 01:17:47.012724 kernel: printk: console [tty0] enabled Jan 17 01:17:47.012742 kernel: printk: console [ttyS0] enabled Jan 17 01:17:47.012755 kernel: ACPI: Core revision 20230628 Jan 17 01:17:47.012768 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 01:17:47.012781 kernel: x2apic enabled Jan 17 01:17:47.012793 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 01:17:47.012811 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 17 01:17:47.012824 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jan 17 01:17:47.012837 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 01:17:47.012850 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 17 01:17:47.012863 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 17 01:17:47.012875 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 01:17:47.012888 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 01:17:47.012900 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 01:17:47.012913 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 17 01:17:47.012925 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 01:17:47.012943 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 01:17:47.012956 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 01:17:47.012968 kernel: MMIO Stale Data: Unknown: No mitigations Jan 17 01:17:47.012981 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 17 01:17:47.012993 kernel: active return thunk: its_return_thunk Jan 17 01:17:47.013006 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 17 01:17:47.013019 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 01:17:47.013031 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 01:17:47.013044 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 01:17:47.013056 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 01:17:47.013074 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 01:17:47.013087 kernel: Freeing SMP alternatives memory: 32K Jan 17 01:17:47.013099 kernel: pid_max: default: 32768 minimum: 301 Jan 17 01:17:47.013112 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 01:17:47.013125 kernel: landlock: Up and running. Jan 17 01:17:47.013137 kernel: SELinux: Initializing. 
Jan 17 01:17:47.013150 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 01:17:47.013163 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 01:17:47.013175 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 17 01:17:47.013188 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 17 01:17:47.013201 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 17 01:17:47.013218 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 17 01:17:47.013232 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Jan 17 01:17:47.013245 kernel: signal: max sigframe size: 1776 Jan 17 01:17:47.013258 kernel: rcu: Hierarchical SRCU implementation. Jan 17 01:17:47.013295 kernel: rcu: Max phase no-delay instances is 400. Jan 17 01:17:47.013310 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 01:17:47.013323 kernel: smp: Bringing up secondary CPUs ... Jan 17 01:17:47.013335 kernel: smpboot: x86: Booting SMP configuration: Jan 17 01:17:47.013348 kernel: .... node #0, CPUs: #1 Jan 17 01:17:47.013367 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 17 01:17:47.013381 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 01:17:47.013393 kernel: smpboot: Max logical packages: 16 Jan 17 01:17:47.013406 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jan 17 01:17:47.013419 kernel: devtmpfs: initialized Jan 17 01:17:47.013432 kernel: x86/mm: Memory block size: 128MB Jan 17 01:17:47.013445 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 01:17:47.013458 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 17 01:17:47.013470 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 01:17:47.013488 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 01:17:47.013501 kernel: audit: initializing netlink subsys (disabled) Jan 17 01:17:47.013514 kernel: audit: type=2000 audit(1768612665.044:1): state=initialized audit_enabled=0 res=1 Jan 17 01:17:47.013526 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 01:17:47.013539 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 01:17:47.013552 kernel: cpuidle: using governor menu Jan 17 01:17:47.013565 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 01:17:47.013577 kernel: dca service started, version 1.12.1 Jan 17 01:17:47.013590 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 01:17:47.013607 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 17 01:17:47.013620 kernel: PCI: Using configuration type 1 for base access Jan 17 01:17:47.013633 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 01:17:47.013646 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 01:17:47.013669 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 01:17:47.013682 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 01:17:47.013695 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 01:17:47.013707 kernel: ACPI: Added _OSI(Module Device) Jan 17 01:17:47.013720 kernel: ACPI: Added _OSI(Processor Device) Jan 17 01:17:47.013738 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 01:17:47.013752 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 01:17:47.013764 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 01:17:47.013777 kernel: ACPI: Interpreter enabled Jan 17 01:17:47.013790 kernel: ACPI: PM: (supports S0 S5) Jan 17 01:17:47.013802 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 01:17:47.013815 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 01:17:47.013828 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 01:17:47.013841 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 01:17:47.013858 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 01:17:47.014142 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 01:17:47.016379 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 17 01:17:47.016560 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 17 01:17:47.016581 kernel: PCI host bridge to bus 0000:00 Jan 17 01:17:47.016784 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 01:17:47.016941 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 01:17:47.017107 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 01:17:47.017259 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 17 01:17:47.017447 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 01:17:47.017600 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 17 01:17:47.017767 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 01:17:47.017974 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 01:17:47.018178 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Jan 17 01:17:47.018377 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Jan 17 01:17:47.018547 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Jan 17 01:17:47.018730 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Jan 17 01:17:47.018900 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 01:17:47.019089 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 17 01:17:47.021337 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Jan 17 01:17:47.021566 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 17 01:17:47.021760 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Jan 17 01:17:47.021952 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 17 01:17:47.022124 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Jan 17 01:17:47.024358 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 17 01:17:47.024547 kernel: pci 0000:00:02.3: reg 0x10: [mem 
0xfea54000-0xfea54fff] Jan 17 01:17:47.024759 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 17 01:17:47.024936 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Jan 17 01:17:47.025128 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 17 01:17:47.025336 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Jan 17 01:17:47.025514 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 17 01:17:47.025696 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Jan 17 01:17:47.025882 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 17 01:17:47.026048 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Jan 17 01:17:47.026223 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 17 01:17:47.027468 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 17 01:17:47.027646 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Jan 17 01:17:47.027833 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jan 17 01:17:47.028006 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Jan 17 01:17:47.028196 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 17 01:17:47.029451 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 17 01:17:47.029631 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Jan 17 01:17:47.029820 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Jan 17 01:17:47.030001 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 01:17:47.030169 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 01:17:47.030362 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 01:17:47.030540 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Jan 17 01:17:47.030723 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Jan 17 01:17:47.030899 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 01:17:47.031067 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 17 01:17:47.031258 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Jan 17 01:17:47.034518 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Jan 17 01:17:47.034723 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 17 01:17:47.034894 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 17 01:17:47.035060 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 17 01:17:47.035245 kernel: pci_bus 0000:02: extended config space not accessible Jan 17 01:17:47.035493 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Jan 17 01:17:47.036379 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Jan 17 01:17:47.036564 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 17 01:17:47.036751 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 17 01:17:47.036935 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 17 01:17:47.037108 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Jan 17 01:17:47.038308 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 17 01:17:47.038521 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 17 01:17:47.038702 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 17 01:17:47.038885 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 17 01:17:47.039069 kernel: pci 0000:04:00.0: reg 0x20: [mem 
0xfca00000-0xfca03fff 64bit pref] Jan 17 01:17:47.039237 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 17 01:17:47.039422 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 17 01:17:47.039589 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 17 01:17:47.039773 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 17 01:17:47.039940 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 17 01:17:47.040105 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 17 01:17:47.042394 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 17 01:17:47.042571 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 17 01:17:47.042756 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 17 01:17:47.042935 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 17 01:17:47.043104 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 17 01:17:47.044306 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 17 01:17:47.044485 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 17 01:17:47.044660 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 17 01:17:47.044865 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 17 01:17:47.045038 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 17 01:17:47.045202 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 17 01:17:47.046415 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 17 01:17:47.046436 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 01:17:47.046450 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 01:17:47.046463 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 01:17:47.046476 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 01:17:47.046489 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 17 01:17:47.046510 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 01:17:47.046523 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 01:17:47.046536 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 01:17:47.046549 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 17 01:17:47.046562 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 01:17:47.046575 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 01:17:47.046588 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 01:17:47.046601 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 01:17:47.046614 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 01:17:47.046631 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 01:17:47.046645 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 01:17:47.046670 kernel: iommu: Default domain type: Translated Jan 17 01:17:47.046683 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 01:17:47.046696 kernel: PCI: Using ACPI for IRQ routing Jan 17 01:17:47.046709 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 01:17:47.046722 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 17 01:17:47.046735 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 17 01:17:47.046898 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 17 01:17:47.047071 kernel: pci 
0000:00:01.0: vgaarb: bridge control possible Jan 17 01:17:47.047234 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 01:17:47.047254 kernel: vgaarb: loaded Jan 17 01:17:47.049294 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 01:17:47.049310 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 01:17:47.049323 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 01:17:47.049336 kernel: pnp: PnP ACPI init Jan 17 01:17:47.049516 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 01:17:47.049546 kernel: pnp: PnP ACPI: found 5 devices Jan 17 01:17:47.049559 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 01:17:47.049572 kernel: NET: Registered PF_INET protocol family Jan 17 01:17:47.049586 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 01:17:47.049599 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 17 01:17:47.049612 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 01:17:47.049625 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 01:17:47.049638 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 01:17:47.049663 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 17 01:17:47.049683 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 01:17:47.049697 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 01:17:47.049709 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 01:17:47.049722 kernel: NET: Registered PF_XDP protocol family Jan 17 01:17:47.049889 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 17 01:17:47.050057 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 17 01:17:47.050223 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 17 01:17:47.050425 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 17 01:17:47.050595 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 17 01:17:47.050780 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 17 01:17:47.050947 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 17 01:17:47.051114 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 17 01:17:47.051317 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 17 01:17:47.051493 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 17 01:17:47.051666 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 17 01:17:47.051832 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 17 01:17:47.051994 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 17 01:17:47.052160 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 17 01:17:47.054362 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 17 01:17:47.054531 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 17 01:17:47.054718 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 17 01:17:47.054919 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 17 01:17:47.055088 kernel: pci 0000:00:02.0: PCI 
bridge to [bus 01-02] Jan 17 01:17:47.055256 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 17 01:17:47.057456 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 17 01:17:47.057621 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 17 01:17:47.057799 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 17 01:17:47.057963 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 17 01:17:47.058149 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 17 01:17:47.058336 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 17 01:17:47.058513 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 17 01:17:47.058690 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 17 01:17:47.058858 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 17 01:17:47.059032 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 17 01:17:47.059208 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 17 01:17:47.059405 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 17 01:17:47.059578 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 17 01:17:47.059757 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 17 01:17:47.059922 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 17 01:17:47.060085 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 17 01:17:47.062290 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 17 01:17:47.062474 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 17 01:17:47.062639 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 17 01:17:47.062818 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 17 01:17:47.062981 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 17 01:17:47.063154 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 17 01:17:47.065356 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 17 01:17:47.065526 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 17 01:17:47.065705 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 17 01:17:47.065872 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 17 01:17:47.066053 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 17 01:17:47.066225 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 17 01:17:47.066419 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 17 01:17:47.066584 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 17 01:17:47.066756 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 01:17:47.066909 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 01:17:47.067060 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 01:17:47.067210 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 17 01:17:47.069403 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 17 01:17:47.069560 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 17 01:17:47.069756 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 17 01:17:47.069921 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 17 01:17:47.070102 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 17 
01:17:47.070310 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 17 01:17:47.070495 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 17 01:17:47.070688 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 17 01:17:47.070919 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 17 01:17:47.071207 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 17 01:17:47.072485 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 17 01:17:47.072665 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 17 01:17:47.072833 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 17 01:17:47.072988 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 17 01:17:47.073162 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 17 01:17:47.074434 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 17 01:17:47.074596 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 17 01:17:47.074771 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 17 01:17:47.074940 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 17 01:17:47.075097 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 17 01:17:47.075251 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 17 01:17:47.075455 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 17 01:17:47.075611 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 17 01:17:47.075913 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 17 01:17:47.076082 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 17 01:17:47.076241 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 17 01:17:47.076437 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 17 01:17:47.076470 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 01:17:47.076485 kernel: PCI: CLS 0 bytes, default 64 Jan 17 01:17:47.076499 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 01:17:47.076513 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 17 01:17:47.076527 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 01:17:47.076541 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 17 01:17:47.076555 kernel: Initialise system trusted keyrings Jan 17 01:17:47.076569 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 01:17:47.076582 kernel: Key type asymmetric registered Jan 17 01:17:47.076601 kernel: Asymmetric key parser 'x509' registered Jan 17 01:17:47.076614 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 01:17:47.076628 kernel: io scheduler mq-deadline registered Jan 17 01:17:47.076642 kernel: io scheduler kyber registered Jan 17 01:17:47.076670 kernel: io scheduler bfq registered Jan 17 01:17:47.076842 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 17 01:17:47.077011 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 17 01:17:47.077178 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:17:47.077388 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 17 01:17:47.077556 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 
17 01:17:47.077739 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:17:47.077910 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 17 01:17:47.078080 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 17 01:17:47.078247 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:17:47.078449 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 17 01:17:47.078615 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 17 01:17:47.078811 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:17:47.078979 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 17 01:17:47.079144 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 17 01:17:47.079341 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:17:47.079519 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 17 01:17:47.079743 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 17 01:17:47.079916 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:17:47.080084 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 17 01:17:47.080249 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 17 01:17:47.080467 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:17:47.080643 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 17 01:17:47.080823 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 17 01:17:47.080987 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:17:47.081009 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 01:17:47.081024 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 01:17:47.081038 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 01:17:47.081052 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 01:17:47.081072 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 01:17:47.081087 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 01:17:47.081100 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 01:17:47.081114 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 01:17:47.081127 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 01:17:47.081325 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 17 01:17:47.081486 kernel: rtc_cmos 00:03: registered as rtc0 Jan 17 01:17:47.081641 kernel: rtc_cmos 00:03: setting system clock to 2026-01-17T01:17:46 UTC (1768612666) Jan 17 01:17:47.081818 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 17 01:17:47.081839 kernel: intel_pstate: CPU model not supported Jan 17 01:17:47.081853 kernel: NET: Registered PF_INET6 protocol family Jan 17 01:17:47.081866 kernel: Segment Routing with IPv6 Jan 17 01:17:47.081880 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 01:17:47.081893 kernel: NET: Registered 
PF_PACKET protocol family Jan 17 01:17:47.081907 kernel: Key type dns_resolver registered Jan 17 01:17:47.081920 kernel: IPI shorthand broadcast: enabled Jan 17 01:17:47.081934 kernel: sched_clock: Marking stable (1259003993, 236856231)->(1618859590, -122999366) Jan 17 01:17:47.081955 kernel: registered taskstats version 1 Jan 17 01:17:47.081969 kernel: Loading compiled-in X.509 certificates Jan 17 01:17:47.081983 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 01:17:47.081996 kernel: Key type .fscrypt registered Jan 17 01:17:47.082009 kernel: Key type fscrypt-provisioning registered Jan 17 01:17:47.082023 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 01:17:47.082036 kernel: ima: Allocated hash algorithm: sha1 Jan 17 01:17:47.082049 kernel: ima: No architecture policies found Jan 17 01:17:47.082068 kernel: clk: Disabling unused clocks Jan 17 01:17:47.082086 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 01:17:47.082100 kernel: Write protecting the kernel read-only data: 36864k Jan 17 01:17:47.082114 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 01:17:47.082127 kernel: Run /init as init process Jan 17 01:17:47.082141 kernel: with arguments: Jan 17 01:17:47.082154 kernel: /init Jan 17 01:17:47.082167 kernel: with environment: Jan 17 01:17:47.082180 kernel: HOME=/ Jan 17 01:17:47.082193 kernel: TERM=linux Jan 17 01:17:47.082210 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 01:17:47.082232 systemd[1]: Detected virtualization kvm. Jan 17 01:17:47.082247 systemd[1]: Detected architecture x86-64. Jan 17 01:17:47.082260 systemd[1]: Running in initrd. Jan 17 01:17:47.082318 systemd[1]: No hostname configured, using default hostname. Jan 17 01:17:47.082332 systemd[1]: Hostname set to <localhost>. Jan 17 01:17:47.082347 systemd[1]: Initializing machine ID from VM UUID. Jan 17 01:17:47.082368 systemd[1]: Queued start job for default target initrd.target. Jan 17 01:17:47.082383 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 01:17:47.082397 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 01:17:47.082412 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 01:17:47.082426 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 01:17:47.082441 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 01:17:47.082455 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 01:17:47.082472 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 01:17:47.082492 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 01:17:47.082506 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 01:17:47.082521 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 17 01:17:47.082535 systemd[1]: Reached target paths.target - Path Units. Jan 17 01:17:47.082550 systemd[1]: Reached target slices.target - Slice Units. Jan 17 01:17:47.082564 systemd[1]: Reached target swap.target - Swaps. Jan 17 01:17:47.082578 systemd[1]: Reached target timers.target - Timer Units. Jan 17 01:17:47.082592 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 01:17:47.082612 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 01:17:47.082627 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 01:17:47.082641 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 01:17:47.082670 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 01:17:47.082685 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 01:17:47.082699 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 01:17:47.082714 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 01:17:47.082728 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 01:17:47.082749 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 01:17:47.082764 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 01:17:47.082778 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 01:17:47.082793 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 01:17:47.082807 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 01:17:47.082821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 01:17:47.082878 systemd-journald[202]: Collecting audit messages is disabled. Jan 17 01:17:47.082918 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 01:17:47.082933 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 01:17:47.082948 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 01:17:47.082968 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 01:17:47.082983 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 01:17:47.082997 kernel: Bridge firewalling registered Jan 17 01:17:47.083012 systemd-journald[202]: Journal started Jan 17 01:17:47.083039 systemd-journald[202]: Runtime Journal (/run/log/journal/4e5255a6ebde4f5abdd5b868cf4acc0c) is 4.7M, max 38.0M, 33.2M free. Jan 17 01:17:47.042008 systemd-modules-load[203]: Inserted module 'overlay' Jan 17 01:17:47.074077 systemd-modules-load[203]: Inserted module 'br_netfilter' Jan 17 01:17:47.132282 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 01:17:47.133566 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 01:17:47.134566 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 01:17:47.144535 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 01:17:47.147089 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 01:17:47.164323 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 17 01:17:47.165525 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 01:17:47.171420 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 01:17:47.184641 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 01:17:47.186395 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 01:17:47.199584 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 01:17:47.201310 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 01:17:47.203370 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 01:17:47.214470 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 01:17:47.220953 dracut-cmdline[234]: dracut-dracut-053 Jan 17 01:17:47.225147 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 01:17:47.263233 systemd-resolved[239]: Positive Trust Anchors: Jan 17 01:17:47.263282 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 01:17:47.263333 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 01:17:47.272636 systemd-resolved[239]: Defaulting to hostname 'linux'. Jan 17 01:17:47.274415 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 01:17:47.275534 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 01:17:47.332302 kernel: SCSI subsystem initialized Jan 17 01:17:47.344298 kernel: Loading iSCSI transport class v2.0-870. Jan 17 01:17:47.357300 kernel: iscsi: registered transport (tcp) Jan 17 01:17:47.384550 kernel: iscsi: registered transport (qla4xxx) Jan 17 01:17:47.384624 kernel: QLogic iSCSI HBA Driver Jan 17 01:17:47.439845 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 01:17:47.446469 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 01:17:47.480394 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 17 01:17:47.480507 kernel: device-mapper: uevent: version 1.0.3
Jan 17 01:17:47.484306 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 01:17:47.531311 kernel: raid6: sse2x4 gen() 14097 MB/s
Jan 17 01:17:47.549297 kernel: raid6: sse2x2 gen() 9496 MB/s
Jan 17 01:17:47.568010 kernel: raid6: sse2x1 gen() 9965 MB/s
Jan 17 01:17:47.568074 kernel: raid6: using algorithm sse2x4 gen() 14097 MB/s
Jan 17 01:17:47.587302 kernel: raid6: .... xor() 7712 MB/s, rmw enabled
Jan 17 01:17:47.587364 kernel: raid6: using ssse3x2 recovery algorithm
Jan 17 01:17:47.614357 kernel: xor: automatically using best checksumming function avx
Jan 17 01:17:47.810337 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 01:17:47.825712 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 01:17:47.832494 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 01:17:47.861908 systemd-udevd[421]: Using default interface naming scheme 'v255'.
Jan 17 01:17:47.869169 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 01:17:47.878699 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 01:17:47.901085 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Jan 17 01:17:47.943178 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 01:17:47.949482 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 01:17:48.059675 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 01:17:48.069783 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 01:17:48.096761 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 01:17:48.099722 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 01:17:48.100545 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 01:17:48.101698 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 01:17:48.112683 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 01:17:48.131439 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 01:17:48.190160 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 17 01:17:48.205454 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 17 01:17:48.207287 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 01:17:48.226512 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 01:17:48.226575 kernel: GPT:17805311 != 125829119
Jan 17 01:17:48.226594 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 01:17:48.226611 kernel: GPT:17805311 != 125829119
Jan 17 01:17:48.226639 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 01:17:48.226670 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 01:17:48.229964 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 01:17:48.243717 kernel: ACPI: bus type USB registered
Jan 17 01:17:48.243750 kernel: usbcore: registered new interface driver usbfs
Jan 17 01:17:48.243770 kernel: usbcore: registered new interface driver hub
Jan 17 01:17:48.243787 kernel: usbcore: registered new device driver usb
Jan 17 01:17:48.230148 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 01:17:48.243481 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 01:17:48.244408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 01:17:48.244650 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 01:17:48.246070 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 01:17:48.255600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 01:17:48.270300 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 17 01:17:48.273300 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Jan 17 01:17:48.279311 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 17 01:17:48.283810 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 17 01:17:48.284050 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Jan 17 01:17:48.285288 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Jan 17 01:17:48.285515 kernel: hub 1-0:1.0: USB hub found
Jan 17 01:17:48.285767 kernel: hub 1-0:1.0: 4 ports detected
Jan 17 01:17:48.291286 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 17 01:17:48.294303 kernel: hub 2-0:1.0: USB hub found
Jan 17 01:17:48.300289 kernel: hub 2-0:1.0: 4 ports detected
Jan 17 01:17:48.313301 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (478)
Jan 17 01:17:48.345292 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (472)
Jan 17 01:17:48.356011 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 01:17:48.449713 kernel: AVX version of gcm_enc/dec engaged.
Jan 17 01:17:48.449759 kernel: AES CTR mode by8 optimization enabled
Jan 17 01:17:48.449779 kernel: libata version 3.00 loaded.
Jan 17 01:17:48.449796 kernel: ahci 0000:00:1f.2: version 3.0
Jan 17 01:17:48.450051 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 17 01:17:48.450073 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 17 01:17:48.450291 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 17 01:17:48.450493 kernel: scsi host0: ahci
Jan 17 01:17:48.450743 kernel: scsi host1: ahci
Jan 17 01:17:48.450958 kernel: scsi host2: ahci
Jan 17 01:17:48.451158 kernel: scsi host3: ahci
Jan 17 01:17:48.451391 kernel: scsi host4: ahci
Jan 17 01:17:48.451586 kernel: scsi host5: ahci
Jan 17 01:17:48.451804 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Jan 17 01:17:48.451825 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Jan 17 01:17:48.451851 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Jan 17 01:17:48.451870 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Jan 17 01:17:48.451888 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Jan 17 01:17:48.451906 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Jan 17 01:17:48.454965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 01:17:48.456160 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 01:17:48.470621 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 01:17:48.476503 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 01:17:48.477388 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 01:17:48.486496 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 01:17:48.491453 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 01:17:48.494458 disk-uuid[559]: Primary Header is updated.
Jan 17 01:17:48.494458 disk-uuid[559]: Secondary Entries is updated.
Jan 17 01:17:48.494458 disk-uuid[559]: Secondary Header is updated.
Jan 17 01:17:48.503028 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 01:17:48.507290 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 01:17:48.524294 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 17 01:17:48.543714 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 01:17:48.698733 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 01:17:48.702314 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 17 01:17:48.706295 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 17 01:17:48.706333 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 17 01:17:48.708229 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 17 01:17:48.711710 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 17 01:17:48.711747 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 17 01:17:48.729665 kernel: usbcore: registered new interface driver usbhid
Jan 17 01:17:48.729730 kernel: usbhid: USB HID core driver
Jan 17 01:17:48.740093 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Jan 17 01:17:48.740164 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Jan 17 01:17:49.516309 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 01:17:49.516834 disk-uuid[560]: The operation has completed successfully.
Jan 17 01:17:49.621983 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 01:17:49.622137 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 01:17:49.634511 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 01:17:49.644666 sh[589]: Success
Jan 17 01:17:49.670316 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Jan 17 01:17:49.739482 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 01:17:49.746699 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 01:17:49.753808 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 01:17:49.774461 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 01:17:49.774539 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 01:17:49.774570 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 01:17:49.776884 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 01:17:49.780113 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 01:17:49.790031 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 01:17:49.791653 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 01:17:49.796457 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 01:17:49.802555 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 01:17:49.826219 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 01:17:49.826312 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 01:17:49.826334 kernel: BTRFS info (device vda6): using free space tree
Jan 17 01:17:49.833293 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 01:17:49.848024 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 01:17:49.848859 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 01:17:49.857613 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 01:17:49.866508 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 01:17:49.938994 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 01:17:49.947522 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 01:17:49.986803 systemd-networkd[770]: lo: Link UP
Jan 17 01:17:49.986819 systemd-networkd[770]: lo: Gained carrier
Jan 17 01:17:49.991426 systemd-networkd[770]: Enumeration completed
Jan 17 01:17:49.991976 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 01:17:49.991981 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 01:17:49.993703 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 01:17:49.994715 systemd[1]: Reached target network.target - Network.
Jan 17 01:17:49.996693 systemd-networkd[770]: eth0: Link UP
Jan 17 01:17:49.996699 systemd-networkd[770]: eth0: Gained carrier
Jan 17 01:17:49.996713 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 01:17:50.023056 ignition[705]: Ignition 2.19.0
Jan 17 01:17:50.023089 ignition[705]: Stage: fetch-offline
Jan 17 01:17:50.023173 ignition[705]: no configs at "/usr/lib/ignition/base.d"
Jan 17 01:17:50.023193 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 01:17:50.026431 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 01:17:50.023394 ignition[705]: parsed url from cmdline: ""
Jan 17 01:17:50.023400 ignition[705]: no config URL provided
Jan 17 01:17:50.023411 ignition[705]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 01:17:50.023428 ignition[705]: no config at "/usr/lib/ignition/user.ign"
Jan 17 01:17:50.023437 ignition[705]: failed to fetch config: resource requires networking
Jan 17 01:17:50.023745 ignition[705]: Ignition finished successfully
Jan 17 01:17:50.034496 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 01:17:50.057146 ignition[777]: Ignition 2.19.0
Jan 17 01:17:50.058217 ignition[777]: Stage: fetch
Jan 17 01:17:50.058494 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jan 17 01:17:50.058515 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 01:17:50.058673 ignition[777]: parsed url from cmdline: ""
Jan 17 01:17:50.058680 ignition[777]: no config URL provided
Jan 17 01:17:50.058690 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 01:17:50.058705 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Jan 17 01:17:50.058921 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 17 01:17:50.059247 ignition[777]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 17 01:17:50.059318 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 17 01:17:50.059340 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 17 01:17:50.066361 systemd-networkd[770]: eth0: DHCPv4 address 10.244.8.82/30, gateway 10.244.8.81 acquired from 10.244.8.81
Jan 17 01:17:50.259954 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Jan 17 01:17:50.274716 ignition[777]: GET result: OK
Jan 17 01:17:50.275347 ignition[777]: parsing config with SHA512: 51c84fa571ddf1eecbee5a220e7d16a8cb70e833fb50c31cef9f079954b55599dbf44863e09706e8677aae63381d1bdd790b8090b74a30993e80a670eba04a7c
Jan 17 01:17:50.282429 unknown[777]: fetched base config from "system"
Jan 17 01:17:50.284179 unknown[777]: fetched base config from "system"
Jan 17 01:17:50.284202 unknown[777]: fetched user config from "openstack"
Jan 17 01:17:50.284680 ignition[777]: fetch: fetch complete
Jan 17 01:17:50.287112 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 01:17:50.284690 ignition[777]: fetch: fetch passed
Jan 17 01:17:50.284758 ignition[777]: Ignition finished successfully
Jan 17 01:17:50.301520 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 01:17:50.322607 ignition[785]: Ignition 2.19.0
Jan 17 01:17:50.322629 ignition[785]: Stage: kargs
Jan 17 01:17:50.322848 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Jan 17 01:17:50.322868 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 01:17:50.325173 ignition[785]: kargs: kargs passed
Jan 17 01:17:50.327957 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 01:17:50.325256 ignition[785]: Ignition finished successfully
Jan 17 01:17:50.336504 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 01:17:50.355541 ignition[791]: Ignition 2.19.0
Jan 17 01:17:50.355555 ignition[791]: Stage: disks
Jan 17 01:17:50.355803 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jan 17 01:17:50.358030 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 01:17:50.355823 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 01:17:50.356874 ignition[791]: disks: disks passed
Jan 17 01:17:50.356944 ignition[791]: Ignition finished successfully
Jan 17 01:17:50.362374 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 01:17:50.363819 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 01:17:50.365443 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 01:17:50.367016 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 01:17:50.368465 systemd[1]: Reached target basic.target - Basic System.
Jan 17 01:17:50.374474 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 01:17:50.404639 systemd-fsck[799]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 01:17:50.408884 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 01:17:50.416416 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 01:17:50.538311 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 01:17:50.539186 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 01:17:50.540854 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 01:17:50.549442 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 01:17:50.553484 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 01:17:50.554729 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 01:17:50.558696 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 17 01:17:50.559667 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 01:17:50.560405 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 01:17:50.567276 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 01:17:50.573355 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (807)
Jan 17 01:17:50.574310 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 01:17:50.574345 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 01:17:50.574365 kernel: BTRFS info (device vda6): using free space tree
Jan 17 01:17:50.577503 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 01:17:50.587344 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 01:17:50.592625 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 01:17:50.678642 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 01:17:50.691666 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Jan 17 01:17:50.697833 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 01:17:50.705243 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 01:17:50.823872 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 01:17:50.830426 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 01:17:50.839543 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 01:17:50.850597 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 01:17:50.853119 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 01:17:50.874341 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 01:17:50.887894 ignition[924]: INFO : Ignition 2.19.0
Jan 17 01:17:50.887894 ignition[924]: INFO : Stage: mount
Jan 17 01:17:50.890728 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 01:17:50.890728 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 01:17:50.890728 ignition[924]: INFO : mount: mount passed
Jan 17 01:17:50.890728 ignition[924]: INFO : Ignition finished successfully
Jan 17 01:17:50.890523 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 01:17:51.055689 systemd-networkd[770]: eth0: Gained IPv6LL
Jan 17 01:17:52.565809 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:214:24:19ff:fef4:852/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:214:24:19ff:fef4:852/64 assigned by NDisc.
Jan 17 01:17:52.565829 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 17 01:17:57.756603 coreos-metadata[809]: Jan 17 01:17:57.756 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 01:17:57.780601 coreos-metadata[809]: Jan 17 01:17:57.780 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 17 01:17:57.846178 coreos-metadata[809]: Jan 17 01:17:57.846 INFO Fetch successful
Jan 17 01:17:57.846178 coreos-metadata[809]: Jan 17 01:17:57.846 INFO wrote hostname srv-3374x.gb1.brightbox.com to /sysroot/etc/hostname
Jan 17 01:17:57.847641 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 17 01:17:57.847830 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 17 01:17:57.858568 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 01:17:57.890632 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 01:17:57.905290 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941)
Jan 17 01:17:57.905364 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 01:17:57.908445 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 01:17:57.908532 kernel: BTRFS info (device vda6): using free space tree
Jan 17 01:17:57.914296 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 01:17:57.917346 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 01:17:57.944944 ignition[959]: INFO : Ignition 2.19.0
Jan 17 01:17:57.944944 ignition[959]: INFO : Stage: files
Jan 17 01:17:57.946793 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 01:17:57.946793 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 01:17:57.946793 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 01:17:57.949642 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 01:17:57.949642 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 01:17:57.951705 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 01:17:57.951705 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 01:17:57.953626 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 01:17:57.952109 unknown[959]: wrote ssh authorized keys file for user: core
Jan 17 01:17:57.955717 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 01:17:57.955717 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 17 01:17:58.216344 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 01:17:58.472662 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 01:17:58.472662 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 01:17:58.481985 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 01:17:58.481985 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 01:17:58.481985 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 01:17:58.481985 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 01:17:58.481985 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 01:17:58.481985 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 01:17:58.481985 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 01:17:58.481985 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 01:17:58.481985 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 01:17:58.481985 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 01:17:58.481985 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 01:17:58.481985 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 01:17:58.481985 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 17 01:17:58.979979 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 01:18:00.327360 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 01:18:00.327360 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 01:18:00.332545 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 01:18:00.332545 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 01:18:00.332545 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 01:18:00.332545 ignition[959]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 01:18:00.332545 ignition[959]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 01:18:00.332545 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 01:18:00.332545 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 01:18:00.332545 ignition[959]: INFO : files: files passed
Jan 17 01:18:00.332545 ignition[959]: INFO : Ignition finished successfully
Jan 17 01:18:00.335782 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 01:18:00.346548 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 01:18:00.352694 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 01:18:00.355047 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 01:18:00.356041 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 01:18:00.377165 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 01:18:00.377165 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 01:18:00.379723 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 01:18:00.381721 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 01:18:00.383182 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 01:18:00.400699 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 01:18:00.433152 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 01:18:00.434249 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 01:18:00.436697 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 01:18:00.437511 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 01:18:00.439236 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 01:18:00.445531 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 01:18:00.465099 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 01:18:00.473538 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 01:18:00.488558 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 01:18:00.489578 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 01:18:00.491299 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 01:18:00.492759 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 01:18:00.492943 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 01:18:00.494692 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 01:18:00.495600 systemd[1]: Stopped target basic.target - Basic System. Jan 17 01:18:00.497159 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 01:18:00.498592 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 01:18:00.499992 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 01:18:00.501573 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 01:18:00.503098 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 01:18:00.504749 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 01:18:00.506210 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 01:18:00.507892 systemd[1]: Stopped target swap.target - Swaps. Jan 17 01:18:00.509259 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 01:18:00.509483 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 01:18:00.511240 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 01:18:00.512302 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 01:18:00.513702 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 01:18:00.513887 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 01:18:00.515247 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 01:18:00.515435 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 01:18:00.517534 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 01:18:00.517754 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 01:18:00.519465 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 01:18:00.519628 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 01:18:00.529217 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 01:18:00.531389 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 01:18:00.531670 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 01:18:00.540829 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 01:18:00.542399 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 17 01:18:00.542621 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 01:18:00.545432 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 01:18:00.545616 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 01:18:00.562805 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 01:18:00.564356 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 01:18:00.569306 ignition[1012]: INFO : Ignition 2.19.0 Jan 17 01:18:00.569306 ignition[1012]: INFO : Stage: umount Jan 17 01:18:00.572582 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 01:18:00.572582 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:18:00.576205 ignition[1012]: INFO : umount: umount passed Jan 17 01:18:00.576205 ignition[1012]: INFO : Ignition finished successfully Jan 17 01:18:00.577783 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 01:18:00.579340 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 01:18:00.581852 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 01:18:00.581955 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 01:18:00.583775 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 01:18:00.583844 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 01:18:00.588060 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 01:18:00.588151 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 01:18:00.588917 systemd[1]: Stopped target network.target - Network. Jan 17 01:18:00.589619 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 01:18:00.589712 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 01:18:00.591144 systemd[1]: Stopped target paths.target - Path Units. Jan 17 01:18:00.592555 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 01:18:00.597341 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 01:18:00.598136 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 01:18:00.599860 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 01:18:00.601326 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 01:18:00.601398 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 01:18:00.602707 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 01:18:00.602771 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 01:18:00.604054 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 01:18:00.604144 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 01:18:00.605558 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 01:18:00.605645 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 01:18:00.607327 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 01:18:00.609293 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 01:18:00.612586 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 01:18:00.613350 systemd-networkd[770]: eth0: DHCPv6 lease lost Jan 17 01:18:00.616009 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 17 01:18:00.616206 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 01:18:00.618398 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 01:18:00.618611 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 01:18:00.623194 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 01:18:00.623347 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 01:18:00.642512 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 01:18:00.643674 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 01:18:00.643760 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 01:18:00.644598 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 01:18:00.644667 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 01:18:00.645676 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 01:18:00.645760 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 01:18:00.647084 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 01:18:00.647168 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 01:18:00.648904 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 01:18:00.661667 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 01:18:00.661933 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 01:18:00.666571 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 01:18:00.666707 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 01:18:00.668579 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 01:18:00.668641 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 01:18:00.670182 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 01:18:00.670254 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 01:18:00.674592 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 01:18:00.674677 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 01:18:00.676197 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 01:18:00.676338 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 01:18:00.693577 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 01:18:00.694414 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 01:18:00.694506 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 01:18:00.695366 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 01:18:00.695461 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 01:18:00.697705 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 01:18:00.698953 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 01:18:00.702627 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 01:18:00.702786 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 01:18:00.716175 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 17 01:18:00.716388 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 01:18:00.718132 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 01:18:00.719487 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 01:18:00.719568 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 01:18:00.728548 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 01:18:00.739447 systemd[1]: Switching root. Jan 17 01:18:00.776379 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Jan 17 01:18:00.776489 systemd-journald[202]: Journal stopped Jan 17 01:18:02.233865 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 01:18:02.233983 kernel: SELinux: policy capability open_perms=1 Jan 17 01:18:02.234007 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 01:18:02.234026 kernel: SELinux: policy capability always_check_network=0 Jan 17 01:18:02.234046 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 01:18:02.234072 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 01:18:02.234098 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 01:18:02.234117 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 01:18:02.234136 kernel: audit: type=1403 audit(1768612681.009:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 01:18:02.234176 systemd[1]: Successfully loaded SELinux policy in 48.026ms. Jan 17 01:18:02.234217 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.346ms. Jan 17 01:18:02.234247 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 01:18:02.234286 systemd[1]: Detected virtualization kvm. Jan 17 01:18:02.234308 systemd[1]: Detected architecture x86-64. Jan 17 01:18:02.234329 systemd[1]: Detected first boot. Jan 17 01:18:02.234350 systemd[1]: Hostname set to . Jan 17 01:18:02.234370 systemd[1]: Initializing machine ID from VM UUID. Jan 17 01:18:02.234405 zram_generator::config[1055]: No configuration found. Jan 17 01:18:02.234442 systemd[1]: Populated /etc with preset unit settings. Jan 17 01:18:02.234474 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 01:18:02.234496 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 01:18:02.234518 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 01:18:02.234539 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 01:18:02.234560 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 01:18:02.234580 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 01:18:02.234600 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 01:18:02.234632 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 01:18:02.234657 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 01:18:02.234678 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 01:18:02.234698 systemd[1]: Created slice user.slice - User and Session Slice. 
Jan 17 01:18:02.234718 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 01:18:02.234740 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 01:18:02.234761 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 01:18:02.234781 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 01:18:02.234802 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 01:18:02.234837 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 01:18:02.234859 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 01:18:02.234880 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 01:18:02.234900 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 01:18:02.234921 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 01:18:02.234942 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 01:18:02.234975 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 01:18:02.234999 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 01:18:02.235019 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 01:18:02.235039 systemd[1]: Reached target slices.target - Slice Units. Jan 17 01:18:02.235059 systemd[1]: Reached target swap.target - Swaps. Jan 17 01:18:02.235080 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 01:18:02.235100 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 01:18:02.235121 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 01:18:02.235141 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 01:18:02.235174 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 01:18:02.235213 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 01:18:02.235247 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 01:18:02.235312 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 01:18:02.235336 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 01:18:02.235356 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:18:02.235401 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 01:18:02.235425 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 01:18:02.235446 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 01:18:02.235469 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 01:18:02.235490 systemd[1]: Reached target machines.target - Containers. Jan 17 01:18:02.235511 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 01:18:02.235532 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 17 01:18:02.235553 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 01:18:02.235582 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 01:18:02.235617 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 01:18:02.235639 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 01:18:02.235659 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 01:18:02.235679 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 01:18:02.235700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 01:18:02.235721 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 01:18:02.235743 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 01:18:02.235764 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 01:18:02.235797 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 01:18:02.235818 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 01:18:02.235838 kernel: fuse: init (API version 7.39) Jan 17 01:18:02.235858 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 01:18:02.235879 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 01:18:02.235900 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 01:18:02.235921 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 01:18:02.235941 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 01:18:02.235967 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 01:18:02.236001 kernel: ACPI: bus type drm_connector registered Jan 17 01:18:02.236023 systemd[1]: Stopped verity-setup.service. Jan 17 01:18:02.236045 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:18:02.236066 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 01:18:02.236086 kernel: loop: module loaded Jan 17 01:18:02.236113 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 01:18:02.236134 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 01:18:02.236155 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 01:18:02.236188 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 01:18:02.236210 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 01:18:02.236230 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 01:18:02.236251 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 01:18:02.236321 systemd-journald[1144]: Collecting audit messages is disabled. Jan 17 01:18:02.236388 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 01:18:02.236414 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 01:18:02.236435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 01:18:02.236456 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 17 01:18:02.236477 systemd-journald[1144]: Journal started Jan 17 01:18:02.236521 systemd-journald[1144]: Runtime Journal (/run/log/journal/4e5255a6ebde4f5abdd5b868cf4acc0c) is 4.7M, max 38.0M, 33.2M free. Jan 17 01:18:01.807288 systemd[1]: Queued start job for default target multi-user.target. Jan 17 01:18:01.830505 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 01:18:01.831210 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 01:18:02.242301 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 01:18:02.242348 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 01:18:02.246070 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 01:18:02.247350 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 01:18:02.247576 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 01:18:02.248811 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 01:18:02.249017 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 01:18:02.250171 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 01:18:02.250469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 01:18:02.251562 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 01:18:02.252742 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 01:18:02.254006 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 01:18:02.270592 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 01:18:02.282351 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 01:18:02.292030 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 01:18:02.294358 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 01:18:02.294426 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 01:18:02.298028 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 01:18:02.303477 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 01:18:02.311900 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 01:18:02.313835 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 01:18:02.322504 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 01:18:02.326615 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 01:18:02.327899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 01:18:02.332832 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 01:18:02.334250 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 01:18:02.336747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 01:18:02.346185 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 17 01:18:02.349625 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 01:18:02.353698 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 01:18:02.354681 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 01:18:02.355799 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 01:18:02.381878 systemd-journald[1144]: Time spent on flushing to /var/log/journal/4e5255a6ebde4f5abdd5b868cf4acc0c is 155.027ms for 1138 entries. Jan 17 01:18:02.381878 systemd-journald[1144]: System Journal (/var/log/journal/4e5255a6ebde4f5abdd5b868cf4acc0c) is 8.0M, max 584.8M, 576.8M free. Jan 17 01:18:02.588124 systemd-journald[1144]: Received client request to flush runtime journal. Jan 17 01:18:02.588475 kernel: loop0: detected capacity change from 0 to 8 Jan 17 01:18:02.588514 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 01:18:02.588539 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 01:18:02.419469 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 01:18:02.427104 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 01:18:02.438919 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 01:18:02.440054 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 01:18:02.451512 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 01:18:02.511994 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 01:18:02.535451 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 01:18:02.538343 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 01:18:02.540194 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 01:18:02.546716 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jan 17 01:18:02.546737 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jan 17 01:18:02.554424 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 01:18:02.569699 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 01:18:02.590860 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 01:18:02.596302 kernel: loop2: detected capacity change from 0 to 219144 Jan 17 01:18:02.612855 udevadm[1206]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 01:18:02.646436 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 01:18:02.701327 kernel: loop4: detected capacity change from 0 to 8 Jan 17 01:18:02.707292 kernel: loop5: detected capacity change from 0 to 142488 Jan 17 01:18:02.731316 kernel: loop6: detected capacity change from 0 to 219144 Jan 17 01:18:02.757511 kernel: loop7: detected capacity change from 0 to 140768 Jan 17 01:18:02.787014 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 17 01:18:02.793384 (sd-merge)[1214]: Merged extensions into '/usr'. Jan 17 01:18:02.803918 systemd[1]: Reloading requested from client PID 1188 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 01:18:02.803942 systemd[1]: Reloading... 
Jan 17 01:18:02.950437 zram_generator::config[1240]: No configuration found. Jan 17 01:18:03.148643 ldconfig[1183]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 01:18:03.150385 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 01:18:03.217337 systemd[1]: Reloading finished in 412 ms. Jan 17 01:18:03.244021 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 01:18:03.246863 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 01:18:03.259565 systemd[1]: Starting ensure-sysext.service... Jan 17 01:18:03.263751 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 01:18:03.275342 systemd[1]: Reloading requested from client PID 1296 ('systemctl') (unit ensure-sysext.service)... Jan 17 01:18:03.275378 systemd[1]: Reloading... Jan 17 01:18:03.323501 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 01:18:03.324072 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 01:18:03.325525 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 01:18:03.325936 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Jan 17 01:18:03.326055 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Jan 17 01:18:03.337310 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 01:18:03.337328 systemd-tmpfiles[1297]: Skipping /boot Jan 17 01:18:03.373531 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 01:18:03.373552 systemd-tmpfiles[1297]: Skipping /boot Jan 17 01:18:03.422294 zram_generator::config[1327]: No configuration found. Jan 17 01:18:03.596610 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 01:18:03.663434 systemd[1]: Reloading finished in 387 ms. Jan 17 01:18:03.690873 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 01:18:03.700114 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 01:18:03.724563 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 01:18:03.729500 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 01:18:03.735082 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 01:18:03.744519 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 01:18:03.754552 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 01:18:03.762989 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 01:18:03.771839 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:18:03.772568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 17 01:18:03.777873 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 01:18:03.783626 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 01:18:03.787559 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 01:18:03.790196 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 01:18:03.790708 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:18:03.806780 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 01:18:03.812242 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:18:03.812836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 01:18:03.813078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 01:18:03.813216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:18:03.818859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:18:03.819164 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 01:18:03.828655 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 01:18:03.829596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 01:18:03.829779 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:18:03.840902 systemd-udevd[1392]: Using default interface naming scheme 'v255'. Jan 17 01:18:03.843652 systemd[1]: Finished ensure-sysext.service. Jan 17 01:18:03.845604 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 01:18:03.849975 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 01:18:03.851730 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 01:18:03.873529 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 01:18:03.880231 augenrules[1410]: No rules Jan 17 01:18:03.884880 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 01:18:03.885120 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 01:18:03.887468 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 01:18:03.888633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 01:18:03.888856 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 01:18:03.892036 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 01:18:03.896837 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 01:18:03.897575 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 17 01:18:03.900134 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 01:18:03.911834 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 01:18:03.913096 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 01:18:03.915987 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 01:18:03.924590 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 01:18:03.931405 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 01:18:03.944597 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 01:18:03.956817 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 01:18:03.960451 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 01:18:04.068450 systemd-networkd[1426]: lo: Link UP
Jan 17 01:18:04.068464 systemd-networkd[1426]: lo: Gained carrier
Jan 17 01:18:04.069541 systemd-networkd[1426]: Enumeration completed
Jan 17 01:18:04.069715 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 01:18:04.081535 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 01:18:04.134924 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 17 01:18:04.135905 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 01:18:04.154139 systemd-resolved[1390]: Positive Trust Anchors:
Jan 17 01:18:04.154169 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 01:18:04.154234 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 01:18:04.169326 systemd-resolved[1390]: Using system hostname 'srv-3374x.gb1.brightbox.com'.
Jan 17 01:18:04.174066 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 01:18:04.175471 systemd[1]: Reached target network.target - Network.
Jan 17 01:18:04.176133 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 01:18:04.191454 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 17 01:18:04.202306 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1438)
Jan 17 01:18:04.217338 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 01:18:04.255060 systemd-networkd[1426]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 01:18:04.255075 systemd-networkd[1426]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 01:18:04.258180 systemd-networkd[1426]: eth0: Link UP
Jan 17 01:18:04.258194 systemd-networkd[1426]: eth0: Gained carrier
Jan 17 01:18:04.258214 systemd-networkd[1426]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 01:18:04.271528 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 17 01:18:04.276364 systemd-networkd[1426]: eth0: DHCPv4 address 10.244.8.82/30, gateway 10.244.8.81 acquired from 10.244.8.81
Jan 17 01:18:04.277539 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection.
Jan 17 01:18:04.297818 kernel: ACPI: button: Power Button [PWRF]
Jan 17 01:18:04.329468 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 01:18:04.336533 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 01:18:04.365299 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 17 01:18:04.370350 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 17 01:18:04.374209 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 01:18:04.377937 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 17 01:18:04.378493 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 17 01:18:04.465047 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 01:18:04.564493 systemd-timesyncd[1411]: Contacted time server 131.111.8.61:123 (0.flatcar.pool.ntp.org).
Jan 17 01:18:04.564721 systemd-timesyncd[1411]: Initial clock synchronization to Sat 2026-01-17 01:18:04.612333 UTC.
Jan 17 01:18:04.635213 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 01:18:04.649888 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 01:18:04.656525 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 01:18:04.682417 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 01:18:04.714251 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 01:18:04.715505 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 01:18:04.716259 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 01:18:04.717299 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 01:18:04.718138 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 01:18:04.719595 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 01:18:04.720538 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 01:18:04.721327 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 01:18:04.722203 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 01:18:04.722262 systemd[1]: Reached target paths.target - Path Units.
Jan 17 01:18:04.722938 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 01:18:04.724858 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 01:18:04.727770 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 01:18:04.733870 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 01:18:04.738512 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 01:18:04.742228 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 01:18:04.743344 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 01:18:04.744010 systemd[1]: Reached target basic.target - Basic System.
Jan 17 01:18:04.744736 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 01:18:04.744777 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 01:18:04.751783 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 01:18:04.762895 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 01:18:04.770875 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 01:18:04.780452 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 01:18:04.787128 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 01:18:04.791547 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 01:18:04.792879 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 01:18:04.796550 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 01:18:04.800252 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 01:18:04.808535 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 01:18:04.811601 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 01:18:04.822153 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 01:18:04.823811 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 01:18:04.826326 jq[1478]: false
Jan 17 01:18:04.825208 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 01:18:04.832158 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 01:18:04.838484 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 01:18:04.844737 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 01:18:04.845034 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 01:18:04.856868 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 01:18:04.857196 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 01:18:04.892218 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 01:18:04.900368 jq[1487]: true
Jan 17 01:18:04.904485 extend-filesystems[1479]: Found loop4
Jan 17 01:18:04.904485 extend-filesystems[1479]: Found loop5
Jan 17 01:18:04.904485 extend-filesystems[1479]: Found loop6
Jan 17 01:18:04.904485 extend-filesystems[1479]: Found loop7
Jan 17 01:18:04.904485 extend-filesystems[1479]: Found vda
Jan 17 01:18:04.904485 extend-filesystems[1479]: Found vda1
Jan 17 01:18:04.904485 extend-filesystems[1479]: Found vda2
Jan 17 01:18:04.904485 extend-filesystems[1479]: Found vda3
Jan 17 01:18:04.904485 extend-filesystems[1479]: Found usr
Jan 17 01:18:04.904485 extend-filesystems[1479]: Found vda4
Jan 17 01:18:04.904485 extend-filesystems[1479]: Found vda6
Jan 17 01:18:04.904485 extend-filesystems[1479]: Found vda7
Jan 17 01:18:04.904485 extend-filesystems[1479]: Found vda9
Jan 17 01:18:04.904485 extend-filesystems[1479]: Checking size of /dev/vda9
Jan 17 01:18:04.902067 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 01:18:04.992752 tar[1494]: linux-amd64/LICENSE
Jan 17 01:18:04.992752 tar[1494]: linux-amd64/helm
Jan 17 01:18:04.956061 dbus-daemon[1476]: [system] SELinux support is enabled
Jan 17 01:18:05.006234 update_engine[1485]: I20260117 01:18:04.944046 1485 main.cc:92] Flatcar Update Engine starting
Jan 17 01:18:05.006234 update_engine[1485]: I20260117 01:18:04.968578 1485 update_check_scheduler.cc:74] Next update check in 3m16s
Jan 17 01:18:04.902390 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 01:18:04.960420 dbus-daemon[1476]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1426 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 17 01:18:04.931183 (ntainerd)[1503]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 01:18:04.964308 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 17 01:18:04.937996 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 17 01:18:04.956463 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 01:18:04.962614 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 01:18:04.962660 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 01:18:04.975561 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 01:18:04.975599 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 01:18:04.979228 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 01:18:04.991304 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 17 01:18:05.004515 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 01:18:05.017654 jq[1507]: true
Jan 17 01:18:05.023555 extend-filesystems[1479]: Resized partition /dev/vda9
Jan 17 01:18:05.032841 extend-filesystems[1519]: resize2fs 1.47.1 (20-May-2024)
Jan 17 01:18:05.061672 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1439)
Jan 17 01:18:05.096489 systemd-logind[1484]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 17 01:18:05.096532 systemd-logind[1484]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 17 01:18:05.097918 systemd-logind[1484]: New seat seat0.
Jan 17 01:18:05.099717 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 01:18:05.114395 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 17 01:18:05.244218 bash[1534]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 01:18:05.245783 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 01:18:05.255676 systemd[1]: Starting sshkeys.service...
Jan 17 01:18:05.309490 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 17 01:18:05.320700 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 17 01:18:05.397297 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 17 01:18:05.397646 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 17 01:18:05.400888 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1513 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 17 01:18:05.410683 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 17 01:18:05.422295 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 17 01:18:05.427204 locksmithd[1514]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 01:18:05.441679 extend-filesystems[1519]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 17 01:18:05.441679 extend-filesystems[1519]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 17 01:18:05.441679 extend-filesystems[1519]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 17 01:18:05.452503 extend-filesystems[1479]: Resized filesystem in /dev/vda9
Jan 17 01:18:05.443177 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 01:18:05.444362 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 01:18:05.456164 polkitd[1548]: Started polkitd version 121
Jan 17 01:18:05.473830 polkitd[1548]: Loading rules from directory /etc/polkit-1/rules.d
Jan 17 01:18:05.473938 polkitd[1548]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 17 01:18:05.480805 polkitd[1548]: Finished loading, compiling and executing 2 rules
Jan 17 01:18:05.482625 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 17 01:18:05.482881 systemd[1]: Started polkit.service - Authorization Manager.
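The extend-filesystems run above is ext4's online grow path: the vda9 partition entry is enlarged first, then resize2fs stretches the mounted root filesystem from 1617920 to 15121403 blocks without an unmount. A hedged sketch of the equivalent manual sequence (growpart from cloud-utils stands in for whatever Flatcar's unit actually calls to grow the partition; only resize2fs is visible in the log):

    growpart /dev/vda 9    # grow partition 9 to the end of the disk (assumed tool)
    resize2fs /dev/vda9    # online-resize the mounted ext4 to fill the partition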
Jan 17 01:18:05.483409 polkitd[1548]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 17 01:18:05.516745 systemd-hostnamed[1513]: Hostname set to <srv-3374x.gb1.brightbox.com> (static)
Jan 17 01:18:05.546358 containerd[1503]: time="2026-01-17T01:18:05.545007183Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 01:18:05.624758 containerd[1503]: time="2026-01-17T01:18:05.622956320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 01:18:05.628685 containerd[1503]: time="2026-01-17T01:18:05.628639861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 01:18:05.629325 containerd[1503]: time="2026-01-17T01:18:05.629295969Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 01:18:05.629428 containerd[1503]: time="2026-01-17T01:18:05.629404298Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 01:18:05.630132 containerd[1503]: time="2026-01-17T01:18:05.630102960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 01:18:05.631052 containerd[1503]: time="2026-01-17T01:18:05.631023898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 01:18:05.631439 containerd[1503]: time="2026-01-17T01:18:05.631408098Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 01:18:05.631926 containerd[1503]: time="2026-01-17T01:18:05.631899065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 01:18:05.632697 containerd[1503]: time="2026-01-17T01:18:05.632261926Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 01:18:05.632866 containerd[1503]: time="2026-01-17T01:18:05.632839706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 01:18:05.634663 containerd[1503]: time="2026-01-17T01:18:05.633315516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 01:18:05.634663 containerd[1503]: time="2026-01-17T01:18:05.633345885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 01:18:05.634663 containerd[1503]: time="2026-01-17T01:18:05.633486867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 01:18:05.634663 containerd[1503]: time="2026-01-17T01:18:05.633884753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 01:18:05.634663 containerd[1503]: time="2026-01-17T01:18:05.634023532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 01:18:05.634663 containerd[1503]: time="2026-01-17T01:18:05.634049752Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 01:18:05.634663 containerd[1503]: time="2026-01-17T01:18:05.634193836Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 01:18:05.636348 containerd[1503]: time="2026-01-17T01:18:05.636319789Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 01:18:05.641470 containerd[1503]: time="2026-01-17T01:18:05.641436864Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 01:18:05.641629 containerd[1503]: time="2026-01-17T01:18:05.641601922Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 01:18:05.641837 containerd[1503]: time="2026-01-17T01:18:05.641810056Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 01:18:05.642326 containerd[1503]: time="2026-01-17T01:18:05.642299747Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 01:18:05.642472 containerd[1503]: time="2026-01-17T01:18:05.642445140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 01:18:05.642767 containerd[1503]: time="2026-01-17T01:18:05.642739855Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 01:18:05.643428 containerd[1503]: time="2026-01-17T01:18:05.643391787Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 01:18:05.643626 containerd[1503]: time="2026-01-17T01:18:05.643596042Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 01:18:05.643685 containerd[1503]: time="2026-01-17T01:18:05.643631249Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 01:18:05.643685 containerd[1503]: time="2026-01-17T01:18:05.643653416Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 01:18:05.643745 containerd[1503]: time="2026-01-17T01:18:05.643686702Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 01:18:05.643745 containerd[1503]: time="2026-01-17T01:18:05.643714766Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 01:18:05.643834 containerd[1503]: time="2026-01-17T01:18:05.643740421Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 01:18:05.643834 containerd[1503]: time="2026-01-17T01:18:05.643763953Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 01:18:05.643834 containerd[1503]: time="2026-01-17T01:18:05.643795866Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 01:18:05.643834 containerd[1503]: time="2026-01-17T01:18:05.643820741Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 01:18:05.643957 containerd[1503]: time="2026-01-17T01:18:05.643840094Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 01:18:05.643957 containerd[1503]: time="2026-01-17T01:18:05.643857074Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 01:18:05.643957 containerd[1503]: time="2026-01-17T01:18:05.643906477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.643957 containerd[1503]: time="2026-01-17T01:18:05.643932997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.643957 containerd[1503]: time="2026-01-17T01:18:05.643952802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.644147 containerd[1503]: time="2026-01-17T01:18:05.643973503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.644147 containerd[1503]: time="2026-01-17T01:18:05.643993157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.644147 containerd[1503]: time="2026-01-17T01:18:05.644012386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.644147 containerd[1503]: time="2026-01-17T01:18:05.644032770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.644147 containerd[1503]: time="2026-01-17T01:18:05.644052105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.644147 containerd[1503]: time="2026-01-17T01:18:05.644105975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.644147 containerd[1503]: time="2026-01-17T01:18:05.644141169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.644425 containerd[1503]: time="2026-01-17T01:18:05.644160093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.644425 containerd[1503]: time="2026-01-17T01:18:05.644178404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.644425 containerd[1503]: time="2026-01-17T01:18:05.644198665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.644425 containerd[1503]: time="2026-01-17T01:18:05.644230747Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 01:18:05.644425 containerd[1503]: time="2026-01-17T01:18:05.644304517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.644425 containerd[1503]: time="2026-01-17T01:18:05.644354397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.644425 containerd[1503]: time="2026-01-17T01:18:05.644375608Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 17 01:18:05.644640 containerd[1503]: time="2026-01-17T01:18:05.644446469Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 17 01:18:05.644640 containerd[1503]: time="2026-01-17T01:18:05.644480215Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 17 01:18:05.644640 containerd[1503]: time="2026-01-17T01:18:05.644500037Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 17 01:18:05.644640 containerd[1503]: time="2026-01-17T01:18:05.644518735Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 17 01:18:05.644640 containerd[1503]: time="2026-01-17T01:18:05.644534705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.644640 containerd[1503]: time="2026-01-17T01:18:05.644553082Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 17 01:18:05.644640 containerd[1503]: time="2026-01-17T01:18:05.644580086Z" level=info msg="NRI interface is disabled by configuration."
Jan 17 01:18:05.644640 containerd[1503]: time="2026-01-17T01:18:05.644600474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 17 01:18:05.646300 containerd[1503]: time="2026-01-17T01:18:05.645047257Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 17 01:18:05.646300 containerd[1503]: time="2026-01-17T01:18:05.645146254Z" level=info msg="Connect containerd service"
Jan 17 01:18:05.646300 containerd[1503]: time="2026-01-17T01:18:05.645214418Z" level=info msg="using legacy CRI server"
Jan 17 01:18:05.646300 containerd[1503]: time="2026-01-17T01:18:05.645238757Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 17 01:18:05.649774 containerd[1503]: time="2026-01-17T01:18:05.649617995Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 17 01:18:05.652588 containerd[1503]: time="2026-01-17T01:18:05.652551932Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
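The CRI config dump above shows SystemdCgroup:true inside the runc runtime options map, which is what makes runc delegate cgroup management to systemd rather than writing cgroupfs directly. A minimal sketch of the config.toml fragment that produces that setting on containerd 1.7 (the /etc/containerd path is the conventional location, assumed rather than read from this host):

    cat <<'EOF' >/etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    EOF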
Jan 17 01:18:05.653530 containerd[1503]: time="2026-01-17T01:18:05.652743254Z" level=info msg="Start subscribing containerd event"
Jan 17 01:18:05.653530 containerd[1503]: time="2026-01-17T01:18:05.652839942Z" level=info msg="Start recovering state"
Jan 17 01:18:05.653530 containerd[1503]: time="2026-01-17T01:18:05.652969426Z" level=info msg="Start event monitor"
Jan 17 01:18:05.653530 containerd[1503]: time="2026-01-17T01:18:05.653010657Z" level=info msg="Start snapshots syncer"
Jan 17 01:18:05.653530 containerd[1503]: time="2026-01-17T01:18:05.653030251Z" level=info msg="Start cni network conf syncer for default"
Jan 17 01:18:05.653530 containerd[1503]: time="2026-01-17T01:18:05.653043869Z" level=info msg="Start streaming server"
Jan 17 01:18:05.653772 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 01:18:05.655606 containerd[1503]: time="2026-01-17T01:18:05.654331122Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 17 01:18:05.655606 containerd[1503]: time="2026-01-17T01:18:05.654431541Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 17 01:18:05.654630 systemd[1]: Started containerd.service - containerd container runtime.
Jan 17 01:18:05.657629 containerd[1503]: time="2026-01-17T01:18:05.657328616Z" level=info msg="containerd successfully booted in 0.114330s"
Jan 17 01:18:05.694155 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 17 01:18:05.703845 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 17 01:18:05.710700 systemd[1]: Started sshd@0-10.244.8.82:22-20.161.92.111:52006.service - OpenSSH per-connection server daemon (20.161.92.111:52006).
Jan 17 01:18:05.724204 systemd[1]: issuegen.service: Deactivated successfully.
Jan 17 01:18:05.724599 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 17 01:18:05.738787 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 01:18:05.768995 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 17 01:18:05.776871 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 17 01:18:05.781624 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 17 01:18:05.782699 systemd[1]: Reached target getty.target - Login Prompts.
Jan 17 01:18:06.025222 tar[1494]: linux-amd64/README.md
Jan 17 01:18:06.040536 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 17 01:18:06.159674 systemd-networkd[1426]: eth0: Gained IPv6LL
Jan 17 01:18:06.164286 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 01:18:06.166323 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 01:18:06.174620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 01:18:06.177696 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 01:18:06.217702 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 17 01:18:06.298624 sshd[1571]: Accepted publickey for core from 20.161.92.111 port 52006 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:18:06.300520 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:18:06.316339 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 17 01:18:06.325663 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 17 01:18:06.334923 systemd-logind[1484]: New session 1 of user core.
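The sshd-keygen.service step above populates /etc/ssh with any host keys missing on first boot; the RSA/ECDSA/ED25519 trio in the log matches OpenSSH's defaults. A one-line sketch of the same operation (whether Flatcar's unit passes extra flags is not shown in the log):

    ssh-keygen -A    # generate all missing default host key types into /etc/ssh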
Jan 17 01:18:06.348813 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 17 01:18:06.359850 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 17 01:18:06.369498 (systemd)[1597]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 17 01:18:06.514882 systemd[1597]: Queued start job for default target default.target.
Jan 17 01:18:06.526390 systemd[1597]: Created slice app.slice - User Application Slice.
Jan 17 01:18:06.526434 systemd[1597]: Reached target paths.target - Paths.
Jan 17 01:18:06.526457 systemd[1597]: Reached target timers.target - Timers.
Jan 17 01:18:06.529456 systemd[1597]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 17 01:18:06.554203 systemd[1597]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 17 01:18:06.554636 systemd[1597]: Reached target sockets.target - Sockets.
Jan 17 01:18:06.554669 systemd[1597]: Reached target basic.target - Basic System.
Jan 17 01:18:06.554739 systemd[1597]: Reached target default.target - Main User Target.
Jan 17 01:18:06.554805 systemd[1597]: Startup finished in 175ms.
Jan 17 01:18:06.554825 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 17 01:18:06.567604 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 17 01:18:07.059583 systemd[1]: Started sshd@1-10.244.8.82:22-20.161.92.111:34060.service - OpenSSH per-connection server daemon (20.161.92.111:34060).
Jan 17 01:18:07.203942 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 01:18:07.220332 (kubelet)[1616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 01:18:07.290192 systemd-networkd[1426]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:214:24:19ff:fef4:852/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:214:24:19ff:fef4:852/64 assigned by NDisc.
Jan 17 01:18:07.290204 systemd-networkd[1426]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 17 01:18:07.622690 sshd[1609]: Accepted publickey for core from 20.161.92.111 port 34060 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:18:07.624129 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:18:07.634343 systemd-logind[1484]: New session 2 of user core.
Jan 17 01:18:07.639720 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 17 01:18:07.772459 kubelet[1616]: E0117 01:18:07.772394 1616 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 01:18:07.775663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 01:18:07.775952 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 01:18:07.776608 systemd[1]: kubelet.service: Consumed 1.022s CPU time.
Jan 17 01:18:08.026124 sshd[1609]: pam_unix(sshd:session): session closed for user core
Jan 17 01:18:08.032551 systemd-logind[1484]: Session 2 logged out. Waiting for processes to exit.
Jan 17 01:18:08.032928 systemd[1]: sshd@1-10.244.8.82:22-20.161.92.111:34060.service: Deactivated successfully.
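The kubelet failure above will repeat on every restart until something writes /var/lib/kubelet/config.yaml, the exact path named in the error. A hedged sketch of the file it is looking for, using the kubelet.config.k8s.io/v1beta1 schema (the one field shown is illustrative; on a real node this file is normally generated by kubeadm during join, not written by hand):

    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # assumption; matches the SystemdCgroup=true runc option above
    EOF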
Jan 17 01:18:08.035099 systemd[1]: session-2.scope: Deactivated successfully.
Jan 17 01:18:08.036401 systemd-logind[1484]: Removed session 2.
Jan 17 01:18:08.126742 systemd[1]: Started sshd@2-10.244.8.82:22-20.161.92.111:34066.service - OpenSSH per-connection server daemon (20.161.92.111:34066).
Jan 17 01:18:08.696028 sshd[1630]: Accepted publickey for core from 20.161.92.111 port 34066 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:18:08.698598 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:18:08.705965 systemd-logind[1484]: New session 3 of user core.
Jan 17 01:18:08.715636 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 17 01:18:09.103214 sshd[1630]: pam_unix(sshd:session): session closed for user core
Jan 17 01:18:09.108514 systemd[1]: sshd@2-10.244.8.82:22-20.161.92.111:34066.service: Deactivated successfully.
Jan 17 01:18:09.111042 systemd[1]: session-3.scope: Deactivated successfully.
Jan 17 01:18:09.113237 systemd-logind[1484]: Session 3 logged out. Waiting for processes to exit.
Jan 17 01:18:09.114931 systemd-logind[1484]: Removed session 3.
Jan 17 01:18:10.835578 login[1579]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 17 01:18:10.844529 systemd-logind[1484]: New session 4 of user core.
Jan 17 01:18:10.848864 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 17 01:18:10.849306 login[1578]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 17 01:18:10.859098 systemd-logind[1484]: New session 5 of user core.
Jan 17 01:18:10.864786 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 17 01:18:11.842014 coreos-metadata[1474]: Jan 17 01:18:11.841 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 01:18:11.867593 coreos-metadata[1474]: Jan 17 01:18:11.867 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 17 01:18:11.911653 coreos-metadata[1474]: Jan 17 01:18:11.911 INFO Fetch failed with 404: resource not found
Jan 17 01:18:11.911653 coreos-metadata[1474]: Jan 17 01:18:11.911 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 17 01:18:11.927289 coreos-metadata[1474]: Jan 17 01:18:11.927 INFO Fetch successful
Jan 17 01:18:11.927547 coreos-metadata[1474]: Jan 17 01:18:11.927 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 17 01:18:11.954078 coreos-metadata[1474]: Jan 17 01:18:11.953 INFO Fetch successful
Jan 17 01:18:11.954345 coreos-metadata[1474]: Jan 17 01:18:11.954 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 17 01:18:11.995405 coreos-metadata[1474]: Jan 17 01:18:11.995 INFO Fetch successful
Jan 17 01:18:11.995686 coreos-metadata[1474]: Jan 17 01:18:11.995 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 17 01:18:12.024028 coreos-metadata[1474]: Jan 17 01:18:12.023 INFO Fetch successful
Jan 17 01:18:12.024028 coreos-metadata[1474]: Jan 17 01:18:12.023 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 17 01:18:12.129730 coreos-metadata[1474]: Jan 17 01:18:12.129 INFO Fetch successful
Jan 17 01:18:12.163644 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 01:18:12.164821 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
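The metadata walk above (one 404 on the OpenStack-specific JSON path, then a series of EC2-compatible fallbacks) can be replayed by hand from the instance, which is useful when coreos-metadata reports failures. A sketch using endpoints taken verbatim from the log; it assumes the link-local metadata address is still reachable:

    curl -sf http://169.254.169.254/latest/meta-data/hostname
    curl -sf http://169.254.169.254/latest/meta-data/public-ipv4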
Jan 17 01:18:12.434683 coreos-metadata[1544]: Jan 17 01:18:12.434 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 01:18:12.457365 coreos-metadata[1544]: Jan 17 01:18:12.457 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 17 01:18:12.481208 coreos-metadata[1544]: Jan 17 01:18:12.481 INFO Fetch successful
Jan 17 01:18:12.481503 coreos-metadata[1544]: Jan 17 01:18:12.481 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 17 01:18:12.531361 coreos-metadata[1544]: Jan 17 01:18:12.531 INFO Fetch successful
Jan 17 01:18:12.533449 unknown[1544]: wrote ssh authorized keys file for user: core
Jan 17 01:18:12.552334 update-ssh-keys[1673]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 01:18:12.553166 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 17 01:18:12.555848 systemd[1]: Finished sshkeys.service.
Jan 17 01:18:12.559759 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 17 01:18:12.560422 systemd[1]: Startup finished in 1.435s (kernel) + 14.259s (initrd) + 11.598s (userspace) = 27.294s.
Jan 17 01:18:18.026431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 17 01:18:18.033563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 01:18:18.199891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 01:18:18.213957 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 01:18:18.327532 kubelet[1684]: E0117 01:18:18.327346 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 01:18:18.332180 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 01:18:18.332452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 01:18:19.230693 systemd[1]: Started sshd@3-10.244.8.82:22-20.161.92.111:46752.service - OpenSSH per-connection server daemon (20.161.92.111:46752).
Jan 17 01:18:19.810375 sshd[1692]: Accepted publickey for core from 20.161.92.111 port 46752 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:18:19.812454 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:18:19.818506 systemd-logind[1484]: New session 6 of user core.
Jan 17 01:18:19.826483 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 01:18:20.222015 sshd[1692]: pam_unix(sshd:session): session closed for user core
Jan 17 01:18:20.226192 systemd[1]: sshd@3-10.244.8.82:22-20.161.92.111:46752.service: Deactivated successfully.
Jan 17 01:18:20.228379 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 01:18:20.229341 systemd-logind[1484]: Session 6 logged out. Waiting for processes to exit.
Jan 17 01:18:20.231608 systemd-logind[1484]: Removed session 6.
Jan 17 01:18:20.325474 systemd[1]: Started sshd@4-10.244.8.82:22-20.161.92.111:46754.service - OpenSSH per-connection server daemon (20.161.92.111:46754).
Jan 17 01:18:20.900476 sshd[1699]: Accepted publickey for core from 20.161.92.111 port 46754 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:18:20.902787 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:18:20.909102 systemd-logind[1484]: New session 7 of user core.
Jan 17 01:18:20.920584 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 01:18:21.299374 sshd[1699]: pam_unix(sshd:session): session closed for user core
Jan 17 01:18:21.304080 systemd-logind[1484]: Session 7 logged out. Waiting for processes to exit.
Jan 17 01:18:21.304499 systemd[1]: sshd@4-10.244.8.82:22-20.161.92.111:46754.service: Deactivated successfully.
Jan 17 01:18:21.306885 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 01:18:21.308908 systemd-logind[1484]: Removed session 7.
Jan 17 01:18:21.409183 systemd[1]: Started sshd@5-10.244.8.82:22-20.161.92.111:46758.service - OpenSSH per-connection server daemon (20.161.92.111:46758).
Jan 17 01:18:21.968712 sshd[1706]: Accepted publickey for core from 20.161.92.111 port 46758 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:18:21.971083 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:18:21.978224 systemd-logind[1484]: New session 8 of user core.
Jan 17 01:18:21.981475 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 01:18:22.375418 sshd[1706]: pam_unix(sshd:session): session closed for user core
Jan 17 01:18:22.381845 systemd[1]: sshd@5-10.244.8.82:22-20.161.92.111:46758.service: Deactivated successfully.
Jan 17 01:18:22.384236 systemd[1]: session-8.scope: Deactivated successfully.
Jan 17 01:18:22.385367 systemd-logind[1484]: Session 8 logged out. Waiting for processes to exit.
Jan 17 01:18:22.387195 systemd-logind[1484]: Removed session 8.
Jan 17 01:18:22.484874 systemd[1]: Started sshd@6-10.244.8.82:22-20.161.92.111:49180.service - OpenSSH per-connection server daemon (20.161.92.111:49180).
Jan 17 01:18:23.043196 sshd[1713]: Accepted publickey for core from 20.161.92.111 port 49180 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:18:23.045187 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:18:23.053350 systemd-logind[1484]: New session 9 of user core.
Jan 17 01:18:23.061559 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 17 01:18:23.371837 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 17 01:18:23.372362 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 01:18:23.388579 sudo[1716]: pam_unix(sudo:session): session closed for user root
Jan 17 01:18:23.482322 sshd[1713]: pam_unix(sshd:session): session closed for user core
Jan 17 01:18:23.487535 systemd[1]: sshd@6-10.244.8.82:22-20.161.92.111:49180.service: Deactivated successfully.
Jan 17 01:18:23.490076 systemd[1]: session-9.scope: Deactivated successfully.
Jan 17 01:18:23.491170 systemd-logind[1484]: Session 9 logged out. Waiting for processes to exit.
Jan 17 01:18:23.492819 systemd-logind[1484]: Removed session 9.
Jan 17 01:18:23.593614 systemd[1]: Started sshd@7-10.244.8.82:22-20.161.92.111:49192.service - OpenSSH per-connection server daemon (20.161.92.111:49192).
Jan 17 01:18:24.185545 sshd[1721]: Accepted publickey for core from 20.161.92.111 port 49192 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:18:24.187647 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:18:24.193942 systemd-logind[1484]: New session 10 of user core.
Jan 17 01:18:24.205560 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 01:18:24.505559 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 17 01:18:24.506008 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 01:18:24.512113 sudo[1725]: pam_unix(sudo:session): session closed for user root
Jan 17 01:18:24.520170 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 17 01:18:24.520670 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 01:18:24.537719 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 17 01:18:24.548801 auditctl[1728]: No rules
Jan 17 01:18:24.549408 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 17 01:18:24.549748 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 17 01:18:24.557941 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 01:18:24.591913 augenrules[1747]: No rules
Jan 17 01:18:24.593314 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 01:18:24.594576 sudo[1724]: pam_unix(sudo:session): session closed for user root
Jan 17 01:18:24.687685 sshd[1721]: pam_unix(sshd:session): session closed for user core
Jan 17 01:18:24.691586 systemd-logind[1484]: Session 10 logged out. Waiting for processes to exit.
Jan 17 01:18:24.691924 systemd[1]: sshd@7-10.244.8.82:22-20.161.92.111:49192.service: Deactivated successfully.
Jan 17 01:18:24.693948 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 01:18:24.695938 systemd-logind[1484]: Removed session 10.
Jan 17 01:18:24.784115 systemd[1]: Started sshd@8-10.244.8.82:22-20.161.92.111:49208.service - OpenSSH per-connection server daemon (20.161.92.111:49208).
Jan 17 01:18:25.357750 sshd[1755]: Accepted publickey for core from 20.161.92.111 port 49208 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 01:18:25.360156 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 01:18:25.367680 systemd-logind[1484]: New session 11 of user core.
Jan 17 01:18:25.374500 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 01:18:25.674027 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 17 01:18:25.675238 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 01:18:26.131697 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 17 01:18:26.140868 (dockerd)[1773]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 17 01:18:26.584792 dockerd[1773]: time="2026-01-17T01:18:26.584685502Z" level=info msg="Starting up"
Jan 17 01:18:26.700807 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1257379815-merged.mount: Deactivated successfully.
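During session 10 above, the core user removed the default audit rule files and restarted audit-rules.service, after which both auditctl and augenrules report No rules. A sketch of how rules would be reinstated through the same augenrules mechanism (the watch rule and file name are illustrative, not taken from this host):

    cat <<'EOF' >/etc/audit/rules.d/10-example.rules
    -w /etc/passwd -p wa -k passwd_changes
    EOF
    augenrules --load    # merges rules.d into /etc/audit/audit.rules and loads it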
Jan 17 01:18:26.765343 dockerd[1773]: time="2026-01-17T01:18:26.765285285Z" level=info msg="Loading containers: start."
Jan 17 01:18:26.913323 kernel: Initializing XFRM netlink socket
Jan 17 01:18:27.016422 systemd-networkd[1426]: docker0: Link UP
Jan 17 01:18:27.071307 dockerd[1773]: time="2026-01-17T01:18:27.071213350Z" level=info msg="Loading containers: done."
Jan 17 01:18:27.102247 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3305956309-merged.mount: Deactivated successfully.
Jan 17 01:18:27.102875 dockerd[1773]: time="2026-01-17T01:18:27.102821217Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 17 01:18:27.102999 dockerd[1773]: time="2026-01-17T01:18:27.102970191Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 17 01:18:27.103183 dockerd[1773]: time="2026-01-17T01:18:27.103153478Z" level=info msg="Daemon has completed initialization"
Jan 17 01:18:27.142210 dockerd[1773]: time="2026-01-17T01:18:27.141350871Z" level=info msg="API listen on /run/docker.sock"
Jan 17 01:18:27.141685 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 17 01:18:28.385649 containerd[1503]: time="2026-01-17T01:18:28.384692710Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Jan 17 01:18:28.560236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 17 01:18:28.569439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 01:18:28.739816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 01:18:28.756844 (kubelet)[1924]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 01:18:28.813197 kubelet[1924]: E0117 01:18:28.813110 1924 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 01:18:28.816487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 01:18:28.816947 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 01:18:29.468825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3139340003.mount: Deactivated successfully.
Jan 17 01:18:31.383000 containerd[1503]: time="2026-01-17T01:18:31.382689889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 01:18:31.384608 containerd[1503]: time="2026-01-17T01:18:31.384292770Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068081"
Jan 17 01:18:31.387289 containerd[1503]: time="2026-01-17T01:18:31.385482492Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 01:18:31.389549 containerd[1503]: time="2026-01-17T01:18:31.389512228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 01:18:31.391354 containerd[1503]: time="2026-01-17T01:18:31.391315731Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 3.006494146s"
Jan 17 01:18:31.391521 containerd[1503]: time="2026-01-17T01:18:31.391491923Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Jan 17 01:18:31.392932 containerd[1503]: time="2026-01-17T01:18:31.392898360Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 17 01:18:33.701177 containerd[1503]: time="2026-01-17T01:18:33.701056038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 01:18:33.703894 containerd[1503]: time="2026-01-17T01:18:33.703565340Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162448"
Jan 17 01:18:33.704903 containerd[1503]: time="2026-01-17T01:18:33.704290009Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 01:18:33.708450 containerd[1503]: time="2026-01-17T01:18:33.708402832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 01:18:33.710326 containerd[1503]: time="2026-01-17T01:18:33.710283200Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 2.317311639s"
Jan 17 01:18:33.710490 containerd[1503]: time="2026-01-17T01:18:33.710460039Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Jan 17 01:18:33.713415 containerd[1503]: time="2026-01-17T01:18:33.713367067Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
containerd[1503]: time="2026-01-17T01:18:33.713367067Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 17 01:18:35.498305 containerd[1503]: time="2026-01-17T01:18:35.496678597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:35.498305 containerd[1503]: time="2026-01-17T01:18:35.497668405Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725935" Jan 17 01:18:35.499430 containerd[1503]: time="2026-01-17T01:18:35.499393628Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:35.506813 containerd[1503]: time="2026-01-17T01:18:35.506763809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:35.508567 containerd[1503]: time="2026-01-17T01:18:35.508527911Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.794950547s" Jan 17 01:18:35.508717 containerd[1503]: time="2026-01-17T01:18:35.508688859Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 17 01:18:35.510197 containerd[1503]: time="2026-01-17T01:18:35.510152864Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 17 01:18:37.211331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546547634.mount: Deactivated successfully. Jan 17 01:18:37.317992 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 17 01:18:37.793676 containerd[1503]: time="2026-01-17T01:18:37.793585426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:37.795213 containerd[1503]: time="2026-01-17T01:18:37.794944488Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965301" Jan 17 01:18:37.796299 containerd[1503]: time="2026-01-17T01:18:37.796124085Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:37.798912 containerd[1503]: time="2026-01-17T01:18:37.798848433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:37.800295 containerd[1503]: time="2026-01-17T01:18:37.800103550Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 2.289738237s" Jan 17 01:18:37.800295 containerd[1503]: time="2026-01-17T01:18:37.800160443Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 17 01:18:37.801623 containerd[1503]: time="2026-01-17T01:18:37.800860812Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 17 01:18:38.431045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1969745427.mount: Deactivated successfully. Jan 17 01:18:39.059988 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 01:18:39.067567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:18:39.265675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 01:18:39.283007 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 01:18:39.433113 kubelet[2064]: E0117 01:18:39.432960 2064 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 01:18:39.435598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 01:18:39.435825 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
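The scheduled kubelet restarts land almost exactly 10.5 s apart (counters 2, 3 and 4), which is what a Restart=always unit with RestartSec=10 produces once the ~0.5 s of start-and-crash is added; the RestartSec value is an assumption based on the stock kubeadm drop-in, not something the log states:

```python
# Spacing of the "Scheduled restart job" stamps for counters 2, 3 and 4.
from datetime import datetime

restarts = ["01:18:28.560236", "01:18:39.059988", "01:18:49.560149"]
ts = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
for a, b in zip(ts, ts[1:]):
    print(f"{(b - a).total_seconds():.1f}s between scheduled restarts")  # 10.5s both times
```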
Jan 17 01:18:40.195741 containerd[1503]: time="2026-01-17T01:18:40.195629765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:40.198335 containerd[1503]: time="2026-01-17T01:18:40.198261339Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Jan 17 01:18:40.207511 containerd[1503]: time="2026-01-17T01:18:40.207430763Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:40.220660 containerd[1503]: time="2026-01-17T01:18:40.220573703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:40.222759 containerd[1503]: time="2026-01-17T01:18:40.222212513Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.421309245s" Jan 17 01:18:40.222759 containerd[1503]: time="2026-01-17T01:18:40.222278675Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 17 01:18:40.223128 containerd[1503]: time="2026-01-17T01:18:40.223097783Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 17 01:18:40.778081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2920252569.mount: Deactivated successfully. 
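The `\x2d` sequences in the tmpmount unit names above are systemd unit-name escaping: '/' in a path becomes '-', and a literal '-' (or any other byte outside [a-zA-Z0-9:_.]) becomes \xXX. A minimal re-implementation of `systemd-escape --path`, checked against the unit name in the log; edge cases such as leading dots are only approximated here:

```python
def systemd_escape_path(path: str) -> str:
    """Rough equivalent of `systemd-escape --path` for ASCII paths."""
    def esc(part: str) -> str:
        out = []
        for i, ch in enumerate(part):
            if ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
                out.append(ch)
            else:  # escape each UTF-8 byte as \xXX, e.g. '-' -> \x2d
                out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
        return "".join(out)
    return "-".join(esc(p) for p in path.strip("/").split("/"))

print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount2920252569"))
# -> var-lib-containerd-tmpmounts-containerd\x2dmount2920252569  (+ ".mount")
```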
Jan 17 01:18:40.786070 containerd[1503]: time="2026-01-17T01:18:40.784769357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:40.786070 containerd[1503]: time="2026-01-17T01:18:40.786009066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Jan 17 01:18:40.787017 containerd[1503]: time="2026-01-17T01:18:40.786960772Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:40.793114 containerd[1503]: time="2026-01-17T01:18:40.792067490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:40.795421 containerd[1503]: time="2026-01-17T01:18:40.795383061Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 572.150875ms" Jan 17 01:18:40.795631 containerd[1503]: time="2026-01-17T01:18:40.795599621Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 17 01:18:40.796698 containerd[1503]: time="2026-01-17T01:18:40.796664399Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 17 01:18:41.391584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4273286777.mount: Deactivated successfully. Jan 17 01:18:45.716024 containerd[1503]: time="2026-01-17T01:18:45.715868465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:45.718258 containerd[1503]: time="2026-01-17T01:18:45.718188410Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166822" Jan 17 01:18:45.719788 containerd[1503]: time="2026-01-17T01:18:45.719713115Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:45.725288 containerd[1503]: time="2026-01-17T01:18:45.723859537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:18:45.725917 containerd[1503]: time="2026-01-17T01:18:45.725864521Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.929149025s" Jan 17 01:18:45.726044 containerd[1503]: time="2026-01-17T01:18:45.726017275Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 17 01:18:49.560149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 17 01:18:49.568526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:18:49.742476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 01:18:49.753991 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 01:18:49.878117 kubelet[2162]: E0117 01:18:49.877655 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 01:18:49.880611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 01:18:49.880857 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 01:18:50.280560 update_engine[1485]: I20260117 01:18:50.280308 1485 update_attempter.cc:509] Updating boot flags... Jan 17 01:18:50.380575 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2176) Jan 17 01:18:50.448482 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2177) Jan 17 01:18:50.545288 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2177) Jan 17 01:18:53.077260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 01:18:53.085652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:18:53.144543 systemd[1]: Reloading requested from client PID 2191 ('systemctl') (unit session-11.scope)... Jan 17 01:18:53.144794 systemd[1]: Reloading... Jan 17 01:18:53.317528 zram_generator::config[2230]: No configuration found. Jan 17 01:18:53.472565 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 01:18:53.578920 systemd[1]: Reloading finished in 433 ms. Jan 17 01:18:53.658805 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 01:18:53.658933 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 01:18:53.659348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 01:18:53.669782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:18:53.838235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 01:18:53.857008 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 01:18:53.959340 kubelet[2296]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 01:18:53.959340 kubelet[2296]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 01:18:53.959933 kubelet[2296]: I0117 01:18:53.959430 2296 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 01:18:54.565857 kubelet[2296]: I0117 01:18:54.565726 2296 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 01:18:54.565857 kubelet[2296]: I0117 01:18:54.565771 2296 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 01:18:54.568130 kubelet[2296]: I0117 01:18:54.568043 2296 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 01:18:54.570295 kubelet[2296]: I0117 01:18:54.569074 2296 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 01:18:54.570295 kubelet[2296]: I0117 01:18:54.569433 2296 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 01:18:54.588412 kubelet[2296]: E0117 01:18:54.588357 2296 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.8.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.8.82:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 01:18:54.591772 kubelet[2296]: I0117 01:18:54.591737 2296 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 01:18:54.606682 kubelet[2296]: E0117 01:18:54.606627 2296 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 01:18:54.606791 kubelet[2296]: I0117 01:18:54.606711 2296 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 01:18:54.623197 kubelet[2296]: I0117 01:18:54.623117 2296 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 17 01:18:54.624976 kubelet[2296]: I0117 01:18:54.624901 2296 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 01:18:54.626454 kubelet[2296]: I0117 01:18:54.624953 2296 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-3374x.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 01:18:54.626724 kubelet[2296]: I0117 01:18:54.626474 2296 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 01:18:54.626724 kubelet[2296]: I0117 01:18:54.626492 2296 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 01:18:54.626724 kubelet[2296]: I0117 01:18:54.626657 2296 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 01:18:54.628897 kubelet[2296]: I0117 01:18:54.628841 2296 state_mem.go:36] "Initialized new in-memory state store" Jan 17 01:18:54.630532 kubelet[2296]: I0117 01:18:54.630493 2296 kubelet.go:475] "Attempting to sync node with API server" Jan 17 01:18:54.630532 kubelet[2296]: I0117 01:18:54.630530 2296 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 01:18:54.631225 kubelet[2296]: I0117 01:18:54.631177 2296 kubelet.go:387] "Adding apiserver pod source" Jan 17 01:18:54.631307 kubelet[2296]: I0117 01:18:54.631229 2296 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 01:18:54.633003 kubelet[2296]: E0117 01:18:54.632899 2296 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.8.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3374x.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.8.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 01:18:54.634966 kubelet[2296]: I0117 01:18:54.634875 2296 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 01:18:54.636806 kubelet[2296]: I0117 
01:18:54.636775 2296 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 01:18:54.636894 kubelet[2296]: I0117 01:18:54.636820 2296 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 01:18:54.641474 kubelet[2296]: W0117 01:18:54.640573 2296 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 01:18:54.646071 kubelet[2296]: E0117 01:18:54.645908 2296 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.244.8.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.8.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 01:18:54.647627 kubelet[2296]: I0117 01:18:54.646834 2296 server.go:1262] "Started kubelet" Jan 17 01:18:54.649926 kubelet[2296]: I0117 01:18:54.649887 2296 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 01:18:54.659394 kubelet[2296]: E0117 01:18:54.654702 2296 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.8.82:6443/api/v1/namespaces/default/events\": dial tcp 10.244.8.82:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-3374x.gb1.brightbox.com.188b5fe961daa41a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-3374x.gb1.brightbox.com,UID:srv-3374x.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-3374x.gb1.brightbox.com,},FirstTimestamp:2026-01-17 01:18:54.646789146 +0000 UTC m=+0.784847546,LastTimestamp:2026-01-17 01:18:54.646789146 +0000 UTC m=+0.784847546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-3374x.gb1.brightbox.com,}" Jan 17 01:18:54.659394 kubelet[2296]: I0117 01:18:54.658732 2296 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 01:18:54.659956 kubelet[2296]: I0117 01:18:54.659898 2296 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 01:18:54.660976 kubelet[2296]: E0117 01:18:54.660827 2296 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-3374x.gb1.brightbox.com\" not found" Jan 17 01:18:54.662421 kubelet[2296]: I0117 01:18:54.662398 2296 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 01:18:54.663301 kubelet[2296]: I0117 01:18:54.662622 2296 reconciler.go:29] "Reconciler: start to sync state" Jan 17 01:18:54.663843 kubelet[2296]: E0117 01:18:54.663814 2296 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.8.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.8.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 01:18:54.664200 kubelet[2296]: I0117 01:18:54.664169 2296 server.go:310] "Adding debug handlers to kubelet server" Jan 17 01:18:54.666445 kubelet[2296]: I0117 01:18:54.666405 2296 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Jan 17 01:18:54.666684 kubelet[2296]: I0117 01:18:54.666632 2296 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 01:18:54.670772 kubelet[2296]: I0117 01:18:54.670365 2296 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 01:18:54.670772 kubelet[2296]: I0117 01:18:54.670618 2296 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 01:18:54.672259 kubelet[2296]: I0117 01:18:54.671985 2296 factory.go:223] Registration of the systemd container factory successfully Jan 17 01:18:54.672259 kubelet[2296]: I0117 01:18:54.672110 2296 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 01:18:54.675971 kubelet[2296]: E0117 01:18:54.666938 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.8.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3374x.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.8.82:6443: connect: connection refused" interval="200ms" Jan 17 01:18:54.676932 kubelet[2296]: E0117 01:18:54.676898 2296 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 01:18:54.678431 kubelet[2296]: I0117 01:18:54.678402 2296 factory.go:223] Registration of the containerd container factory successfully Jan 17 01:18:54.695843 kubelet[2296]: I0117 01:18:54.695639 2296 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 01:18:54.697895 kubelet[2296]: I0117 01:18:54.697320 2296 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 17 01:18:54.697895 kubelet[2296]: I0117 01:18:54.697361 2296 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 01:18:54.697895 kubelet[2296]: I0117 01:18:54.697408 2296 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 01:18:54.697895 kubelet[2296]: E0117 01:18:54.697504 2296 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 01:18:54.708402 kubelet[2296]: E0117 01:18:54.708365 2296 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.8.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.8.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 01:18:54.725571 kubelet[2296]: I0117 01:18:54.725537 2296 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 01:18:54.725848 kubelet[2296]: I0117 01:18:54.725823 2296 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 01:18:54.725913 kubelet[2296]: I0117 01:18:54.725871 2296 state_mem.go:36] "Initialized new in-memory state store" Jan 17 01:18:54.727952 kubelet[2296]: I0117 01:18:54.727931 2296 policy_none.go:49] "None policy: Start" Jan 17 01:18:54.728031 kubelet[2296]: I0117 01:18:54.727969 2296 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 01:18:54.728031 kubelet[2296]: I0117 01:18:54.727989 2296 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 01:18:54.730984 kubelet[2296]: I0117 01:18:54.730944 2296 policy_none.go:47] "Start" Jan 17 01:18:54.739492 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 01:18:54.757630 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 01:18:54.761411 kubelet[2296]: E0117 01:18:54.761376 2296 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-3374x.gb1.brightbox.com\" not found" Jan 17 01:18:54.762627 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 01:18:54.771648 kubelet[2296]: E0117 01:18:54.771583 2296 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 01:18:54.771918 kubelet[2296]: I0117 01:18:54.771879 2296 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 01:18:54.771978 kubelet[2296]: I0117 01:18:54.771916 2296 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 01:18:54.772312 kubelet[2296]: I0117 01:18:54.772288 2296 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 01:18:54.774291 kubelet[2296]: E0117 01:18:54.774248 2296 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 01:18:54.774621 kubelet[2296]: E0117 01:18:54.774358 2296 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-3374x.gb1.brightbox.com\" not found" Jan 17 01:18:54.817370 systemd[1]: Created slice kubepods-burstable-podc87b595b553eaed33d6b891f9cba340d.slice - libcontainer container kubepods-burstable-podc87b595b553eaed33d6b891f9cba340d.slice. 
Jan 17 01:18:54.827814 kubelet[2296]: E0117 01:18:54.827687 2296 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3374x.gb1.brightbox.com\" not found" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:54.832141 systemd[1]: Created slice kubepods-burstable-pod18f6d0bd9d618dd78a48afe079a54b5b.slice - libcontainer container kubepods-burstable-pod18f6d0bd9d618dd78a48afe079a54b5b.slice. Jan 17 01:18:54.845528 kubelet[2296]: E0117 01:18:54.845454 2296 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3374x.gb1.brightbox.com\" not found" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:54.850036 systemd[1]: Created slice kubepods-burstable-pod889642b4bf65931fbb8c79fb9b2f14c7.slice - libcontainer container kubepods-burstable-pod889642b4bf65931fbb8c79fb9b2f14c7.slice. Jan 17 01:18:54.853122 kubelet[2296]: E0117 01:18:54.853090 2296 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3374x.gb1.brightbox.com\" not found" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:54.863734 kubelet[2296]: I0117 01:18:54.863691 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/889642b4bf65931fbb8c79fb9b2f14c7-kubeconfig\") pod \"kube-scheduler-srv-3374x.gb1.brightbox.com\" (UID: \"889642b4bf65931fbb8c79fb9b2f14c7\") " pod="kube-system/kube-scheduler-srv-3374x.gb1.brightbox.com" Jan 17 01:18:54.863907 kubelet[2296]: I0117 01:18:54.863744 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c87b595b553eaed33d6b891f9cba340d-ca-certs\") pod \"kube-apiserver-srv-3374x.gb1.brightbox.com\" (UID: \"c87b595b553eaed33d6b891f9cba340d\") " pod="kube-system/kube-apiserver-srv-3374x.gb1.brightbox.com" Jan 17 01:18:54.863907 kubelet[2296]: I0117 01:18:54.863795 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/18f6d0bd9d618dd78a48afe079a54b5b-flexvolume-dir\") pod \"kube-controller-manager-srv-3374x.gb1.brightbox.com\" (UID: \"18f6d0bd9d618dd78a48afe079a54b5b\") " pod="kube-system/kube-controller-manager-srv-3374x.gb1.brightbox.com" Jan 17 01:18:54.863907 kubelet[2296]: I0117 01:18:54.863830 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/18f6d0bd9d618dd78a48afe079a54b5b-k8s-certs\") pod \"kube-controller-manager-srv-3374x.gb1.brightbox.com\" (UID: \"18f6d0bd9d618dd78a48afe079a54b5b\") " pod="kube-system/kube-controller-manager-srv-3374x.gb1.brightbox.com" Jan 17 01:18:54.863907 kubelet[2296]: I0117 01:18:54.863883 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/18f6d0bd9d618dd78a48afe079a54b5b-kubeconfig\") pod \"kube-controller-manager-srv-3374x.gb1.brightbox.com\" (UID: \"18f6d0bd9d618dd78a48afe079a54b5b\") " pod="kube-system/kube-controller-manager-srv-3374x.gb1.brightbox.com" Jan 17 01:18:54.864089 kubelet[2296]: I0117 01:18:54.863932 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c87b595b553eaed33d6b891f9cba340d-k8s-certs\") pod \"kube-apiserver-srv-3374x.gb1.brightbox.com\" (UID: \"c87b595b553eaed33d6b891f9cba340d\") " pod="kube-system/kube-apiserver-srv-3374x.gb1.brightbox.com" Jan 17 01:18:54.864089 kubelet[2296]: I0117 01:18:54.863959 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c87b595b553eaed33d6b891f9cba340d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-3374x.gb1.brightbox.com\" (UID: \"c87b595b553eaed33d6b891f9cba340d\") " pod="kube-system/kube-apiserver-srv-3374x.gb1.brightbox.com" Jan 17 01:18:54.864089 kubelet[2296]: I0117 01:18:54.863987 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/18f6d0bd9d618dd78a48afe079a54b5b-ca-certs\") pod \"kube-controller-manager-srv-3374x.gb1.brightbox.com\" (UID: \"18f6d0bd9d618dd78a48afe079a54b5b\") " pod="kube-system/kube-controller-manager-srv-3374x.gb1.brightbox.com" Jan 17 01:18:54.864089 kubelet[2296]: I0117 01:18:54.864013 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/18f6d0bd9d618dd78a48afe079a54b5b-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-3374x.gb1.brightbox.com\" (UID: \"18f6d0bd9d618dd78a48afe079a54b5b\") " pod="kube-system/kube-controller-manager-srv-3374x.gb1.brightbox.com" Jan 17 01:18:54.875143 kubelet[2296]: I0117 01:18:54.875111 2296 kubelet_node_status.go:75] "Attempting to register node" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:54.875592 kubelet[2296]: E0117 01:18:54.875550 2296 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.8.82:6443/api/v1/nodes\": dial tcp 10.244.8.82:6443: connect: connection refused" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:54.877254 kubelet[2296]: E0117 01:18:54.877182 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.8.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3374x.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.8.82:6443: connect: connection refused" interval="400ms" Jan 17 01:18:55.079979 kubelet[2296]: I0117 01:18:55.079551 2296 kubelet_node_status.go:75] "Attempting to register node" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:55.081586 kubelet[2296]: E0117 01:18:55.081469 2296 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.8.82:6443/api/v1/nodes\": dial tcp 10.244.8.82:6443: connect: connection refused" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:55.134392 containerd[1503]: time="2026-01-17T01:18:55.133871384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-3374x.gb1.brightbox.com,Uid:c87b595b553eaed33d6b891f9cba340d,Namespace:kube-system,Attempt:0,}" Jan 17 01:18:55.157097 containerd[1503]: time="2026-01-17T01:18:55.156752369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-3374x.gb1.brightbox.com,Uid:18f6d0bd9d618dd78a48afe079a54b5b,Namespace:kube-system,Attempt:0,}" Jan 17 01:18:55.158147 containerd[1503]: time="2026-01-17T01:18:55.158115552Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-srv-3374x.gb1.brightbox.com,Uid:889642b4bf65931fbb8c79fb9b2f14c7,Namespace:kube-system,Attempt:0,}" Jan 17 01:18:55.278035 kubelet[2296]: E0117 01:18:55.277972 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.8.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3374x.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.8.82:6443: connect: connection refused" interval="800ms" Jan 17 01:18:55.485064 kubelet[2296]: I0117 01:18:55.484593 2296 kubelet_node_status.go:75] "Attempting to register node" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:55.485064 kubelet[2296]: E0117 01:18:55.484969 2296 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.8.82:6443/api/v1/nodes\": dial tcp 10.244.8.82:6443: connect: connection refused" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:55.836765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1020832848.mount: Deactivated successfully. Jan 17 01:18:55.845152 containerd[1503]: time="2026-01-17T01:18:55.844298428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 01:18:55.846212 containerd[1503]: time="2026-01-17T01:18:55.846030184Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 17 01:18:55.866232 containerd[1503]: time="2026-01-17T01:18:55.866134055Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 01:18:55.868767 containerd[1503]: time="2026-01-17T01:18:55.868722520Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 01:18:55.870292 containerd[1503]: time="2026-01-17T01:18:55.869555235Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 01:18:55.870378 containerd[1503]: time="2026-01-17T01:18:55.870312674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 01:18:55.870887 containerd[1503]: time="2026-01-17T01:18:55.870855091Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 01:18:55.877355 containerd[1503]: time="2026-01-17T01:18:55.877315011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 01:18:55.877838 containerd[1503]: time="2026-01-17T01:18:55.877784950Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 719.401741ms" Jan 17 01:18:55.880416 containerd[1503]: time="2026-01-17T01:18:55.880377475Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 723.443993ms" Jan 17 01:18:55.883331 containerd[1503]: time="2026-01-17T01:18:55.883059678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 749.041303ms" Jan 17 01:18:55.999107 kubelet[2296]: E0117 01:18:55.998802 2296 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.8.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.8.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 01:18:56.030321 kubelet[2296]: E0117 01:18:56.030208 2296 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.8.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.8.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 01:18:56.080760 kubelet[2296]: E0117 01:18:56.079458 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.8.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3374x.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.8.82:6443: connect: connection refused" interval="1.6s" Jan 17 01:18:56.119051 kubelet[2296]: E0117 01:18:56.118900 2296 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.8.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3374x.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.8.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 01:18:56.129305 containerd[1503]: time="2026-01-17T01:18:56.128948708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:18:56.129305 containerd[1503]: time="2026-01-17T01:18:56.129066072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:18:56.129305 containerd[1503]: time="2026-01-17T01:18:56.129093204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:18:56.131315 containerd[1503]: time="2026-01-17T01:18:56.129884241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:18:56.131315 containerd[1503]: time="2026-01-17T01:18:56.129968956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:18:56.131315 containerd[1503]: time="2026-01-17T01:18:56.129990895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:18:56.131315 containerd[1503]: time="2026-01-17T01:18:56.130118969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:18:56.132519 containerd[1503]: time="2026-01-17T01:18:56.132431651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:18:56.135593 containerd[1503]: time="2026-01-17T01:18:56.135489313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:18:56.136125 containerd[1503]: time="2026-01-17T01:18:56.135577768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:18:56.136125 containerd[1503]: time="2026-01-17T01:18:56.135614304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:18:56.136125 containerd[1503]: time="2026-01-17T01:18:56.135730909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:18:56.173426 kubelet[2296]: E0117 01:18:56.173349 2296 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.244.8.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.8.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 01:18:56.182523 systemd[1]: Started cri-containerd-8589ce22c3a73eaae86e2ab9b358c11116efd2165bb6e4fde349978bdbbe130a.scope - libcontainer container 8589ce22c3a73eaae86e2ab9b358c11116efd2165bb6e4fde349978bdbbe130a. Jan 17 01:18:56.192873 systemd[1]: Started cri-containerd-f0f6e1c786fd973e644cabbc2d2e958d9b12cea4868a018277d80b852a9acefc.scope - libcontainer container f0f6e1c786fd973e644cabbc2d2e958d9b12cea4868a018277d80b852a9acefc. Jan 17 01:18:56.203063 systemd[1]: Started cri-containerd-39d18240aebfbab7b8a63dd4fc742a2df528ee61264ca13a49accad8c58ca155.scope - libcontainer container 39d18240aebfbab7b8a63dd4fc742a2df528ee61264ca13a49accad8c58ca155. 
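While 10.244.8.82:6443 keeps refusing connections, the node-lease controller's retry interval doubles on every failure; the journal shows interval=200ms, then 400ms, 800ms and 1.6s:

```python
# "Failed to ensure lease exists, will retry" interval= values from above.
observed_ms = [200, 400, 800, 1600]
for prev, cur in zip(observed_ms, observed_ms[1:]):
    assert cur == 2 * prev  # plain doubling; no jitter visible at this scale
print(" -> ".join(f"{ms}ms" for ms in observed_ms))
```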
Jan 17 01:18:56.289999 kubelet[2296]: I0117 01:18:56.289813 2296 kubelet_node_status.go:75] "Attempting to register node" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:56.290243 kubelet[2296]: E0117 01:18:56.290195 2296 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.8.82:6443/api/v1/nodes\": dial tcp 10.244.8.82:6443: connect: connection refused" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:56.294059 containerd[1503]: time="2026-01-17T01:18:56.293999379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-3374x.gb1.brightbox.com,Uid:c87b595b553eaed33d6b891f9cba340d,Namespace:kube-system,Attempt:0,} returns sandbox id \"39d18240aebfbab7b8a63dd4fc742a2df528ee61264ca13a49accad8c58ca155\"" Jan 17 01:18:56.320107 containerd[1503]: time="2026-01-17T01:18:56.320035188Z" level=info msg="CreateContainer within sandbox \"39d18240aebfbab7b8a63dd4fc742a2df528ee61264ca13a49accad8c58ca155\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 01:18:56.329475 containerd[1503]: time="2026-01-17T01:18:56.329432216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-3374x.gb1.brightbox.com,Uid:18f6d0bd9d618dd78a48afe079a54b5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0f6e1c786fd973e644cabbc2d2e958d9b12cea4868a018277d80b852a9acefc\"" Jan 17 01:18:56.336186 containerd[1503]: time="2026-01-17T01:18:56.335993466Z" level=info msg="CreateContainer within sandbox \"f0f6e1c786fd973e644cabbc2d2e958d9b12cea4868a018277d80b852a9acefc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 01:18:56.341153 containerd[1503]: time="2026-01-17T01:18:56.340937583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-3374x.gb1.brightbox.com,Uid:889642b4bf65931fbb8c79fb9b2f14c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8589ce22c3a73eaae86e2ab9b358c11116efd2165bb6e4fde349978bdbbe130a\"" Jan 17 01:18:56.347881 containerd[1503]: time="2026-01-17T01:18:56.347755435Z" level=info msg="CreateContainer within sandbox \"8589ce22c3a73eaae86e2ab9b358c11116efd2165bb6e4fde349978bdbbe130a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 01:18:56.350961 containerd[1503]: time="2026-01-17T01:18:56.350771985Z" level=info msg="CreateContainer within sandbox \"39d18240aebfbab7b8a63dd4fc742a2df528ee61264ca13a49accad8c58ca155\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"df188d67277f920bfe9bad6da4caeb1a1fcf36fc62e5757cb56179f43b09e360\"" Jan 17 01:18:56.351483 containerd[1503]: time="2026-01-17T01:18:56.351449902Z" level=info msg="StartContainer for \"df188d67277f920bfe9bad6da4caeb1a1fcf36fc62e5757cb56179f43b09e360\"" Jan 17 01:18:56.358599 containerd[1503]: time="2026-01-17T01:18:56.358550424Z" level=info msg="CreateContainer within sandbox \"f0f6e1c786fd973e644cabbc2d2e958d9b12cea4868a018277d80b852a9acefc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cba50fc2a975fa48fefcd7e10c118408fb123b630b9743518000c405b50ae8bd\"" Jan 17 01:18:56.360150 containerd[1503]: time="2026-01-17T01:18:56.360121119Z" level=info msg="StartContainer for \"cba50fc2a975fa48fefcd7e10c118408fb123b630b9743518000c405b50ae8bd\"" Jan 17 01:18:56.374124 containerd[1503]: time="2026-01-17T01:18:56.373082034Z" level=info msg="CreateContainer within sandbox \"8589ce22c3a73eaae86e2ab9b358c11116efd2165bb6e4fde349978bdbbe130a\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"94677ba2fd7b32c8392a4df93e8331d551fd32b5f2c43e0659c4ef612d1c9db9\"" Jan 17 01:18:56.375500 containerd[1503]: time="2026-01-17T01:18:56.375462874Z" level=info msg="StartContainer for \"94677ba2fd7b32c8392a4df93e8331d551fd32b5f2c43e0659c4ef612d1c9db9\"" Jan 17 01:18:56.410032 systemd[1]: Started cri-containerd-df188d67277f920bfe9bad6da4caeb1a1fcf36fc62e5757cb56179f43b09e360.scope - libcontainer container df188d67277f920bfe9bad6da4caeb1a1fcf36fc62e5757cb56179f43b09e360. Jan 17 01:18:56.424618 systemd[1]: Started cri-containerd-cba50fc2a975fa48fefcd7e10c118408fb123b630b9743518000c405b50ae8bd.scope - libcontainer container cba50fc2a975fa48fefcd7e10c118408fb123b630b9743518000c405b50ae8bd. Jan 17 01:18:56.444554 systemd[1]: Started cri-containerd-94677ba2fd7b32c8392a4df93e8331d551fd32b5f2c43e0659c4ef612d1c9db9.scope - libcontainer container 94677ba2fd7b32c8392a4df93e8331d551fd32b5f2c43e0659c4ef612d1c9db9. Jan 17 01:18:56.515292 containerd[1503]: time="2026-01-17T01:18:56.514021396Z" level=info msg="StartContainer for \"df188d67277f920bfe9bad6da4caeb1a1fcf36fc62e5757cb56179f43b09e360\" returns successfully" Jan 17 01:18:56.543925 containerd[1503]: time="2026-01-17T01:18:56.543764689Z" level=info msg="StartContainer for \"cba50fc2a975fa48fefcd7e10c118408fb123b630b9743518000c405b50ae8bd\" returns successfully" Jan 17 01:18:56.568066 containerd[1503]: time="2026-01-17T01:18:56.567584570Z" level=info msg="StartContainer for \"94677ba2fd7b32c8392a4df93e8331d551fd32b5f2c43e0659c4ef612d1c9db9\" returns successfully" Jan 17 01:18:56.735160 kubelet[2296]: E0117 01:18:56.735033 2296 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3374x.gb1.brightbox.com\" not found" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:56.738502 kubelet[2296]: E0117 01:18:56.738476 2296 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3374x.gb1.brightbox.com\" not found" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:56.746645 kubelet[2296]: E0117 01:18:56.746610 2296 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3374x.gb1.brightbox.com\" not found" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:56.768397 kubelet[2296]: E0117 01:18:56.768344 2296 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.8.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.8.82:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 01:18:57.750740 kubelet[2296]: E0117 01:18:57.750578 2296 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3374x.gb1.brightbox.com\" not found" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:57.751665 kubelet[2296]: E0117 01:18:57.751640 2296 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3374x.gb1.brightbox.com\" not found" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:57.752064 kubelet[2296]: E0117 01:18:57.752039 2296 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3374x.gb1.brightbox.com\" not found" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:57.894486 
kubelet[2296]: I0117 01:18:57.894448 2296 kubelet_node_status.go:75] "Attempting to register node" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:59.539815 kubelet[2296]: E0117 01:18:59.539773 2296 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3374x.gb1.brightbox.com\" not found" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:59.827844 kubelet[2296]: E0117 01:18:59.827692 2296 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-3374x.gb1.brightbox.com\" not found" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:59.858858 kubelet[2296]: I0117 01:18:59.858761 2296 kubelet_node_status.go:78] "Successfully registered node" node="srv-3374x.gb1.brightbox.com" Jan 17 01:18:59.862356 kubelet[2296]: I0117 01:18:59.862146 2296 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-3374x.gb1.brightbox.com" Jan 17 01:18:59.886879 kubelet[2296]: E0117 01:18:59.886828 2296 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-3374x.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-3374x.gb1.brightbox.com" Jan 17 01:18:59.887427 kubelet[2296]: I0117 01:18:59.887128 2296 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-3374x.gb1.brightbox.com" Jan 17 01:18:59.896323 kubelet[2296]: E0117 01:18:59.896233 2296 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-3374x.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-3374x.gb1.brightbox.com" Jan 17 01:18:59.896323 kubelet[2296]: I0117 01:18:59.896292 2296 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-3374x.gb1.brightbox.com" Jan 17 01:18:59.900130 kubelet[2296]: E0117 01:18:59.900056 2296 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-3374x.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-3374x.gb1.brightbox.com" Jan 17 01:19:00.649625 kubelet[2296]: I0117 01:19:00.649562 2296 apiserver.go:52] "Watching apiserver" Jan 17 01:19:00.663342 kubelet[2296]: I0117 01:19:00.663250 2296 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 01:19:01.905976 systemd[1]: Reloading requested from client PID 2586 ('systemctl') (unit session-11.scope)... Jan 17 01:19:01.906007 systemd[1]: Reloading... Jan 17 01:19:02.044508 zram_generator::config[2628]: No configuration found. Jan 17 01:19:02.219200 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 01:19:02.349875 systemd[1]: Reloading finished in 443 ms. Jan 17 01:19:02.416621 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:19:02.432066 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 01:19:02.432552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 01:19:02.432647 systemd[1]: kubelet.service: Consumed 1.293s CPU time, 122.5M memory peak, 0B memory swap peak. 
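From the stamps above, the control plane converges a few seconds after the kube-apiserver container starts; the "no PriorityClass with name system-node-critical" rejections in between stop once the apiserver has created its built-in priority classes (an assumption about stock apiserver bootstrap, not something the log states). Times are copied from the journal, labels are mine:

```python
from datetime import datetime

def t(s: str) -> datetime:
    return datetime.strptime(s, "%H:%M:%S.%f")

events = [
    ("kube-apiserver container started", t("01:18:56.514021")),
    ("node registration accepted",       t("01:18:59.858761")),
    ("kubelet watching apiserver",       t("01:19:00.649562")),
]
start = events[0][1]
for label, ts in events:
    print(f"+{(ts - start).total_seconds():5.2f}s  {label}")
# +0.00s, +3.34s, +4.14s: roughly three seconds from apiserver start to a
# successfully registered node.
```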
Jan 17 01:19:02.440757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:19:02.766553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 01:19:02.779864 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 01:19:02.902768 kubelet[2689]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 01:19:02.904003 kubelet[2689]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 01:19:02.906308 kubelet[2689]: I0117 01:19:02.906230 2689 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 01:19:02.923092 kubelet[2689]: I0117 01:19:02.922926 2689 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 01:19:02.923092 kubelet[2689]: I0117 01:19:02.922964 2689 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 01:19:02.925348 kubelet[2689]: I0117 01:19:02.924552 2689 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 01:19:02.925348 kubelet[2689]: I0117 01:19:02.924571 2689 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 01:19:02.925348 kubelet[2689]: I0117 01:19:02.924863 2689 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 01:19:02.926715 kubelet[2689]: I0117 01:19:02.926690 2689 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 01:19:02.930774 kubelet[2689]: I0117 01:19:02.930748 2689 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 01:19:02.955724 kubelet[2689]: E0117 01:19:02.955672 2689 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 01:19:02.955996 kubelet[2689]: I0117 01:19:02.955974 2689 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 01:19:02.963497 kubelet[2689]: I0117 01:19:02.963465 2689 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 17 01:19:02.969211 kubelet[2689]: I0117 01:19:02.969159 2689 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 01:19:02.969610 kubelet[2689]: I0117 01:19:02.969364 2689 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-3374x.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 01:19:02.969816 kubelet[2689]: I0117 01:19:02.969795 2689 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 01:19:02.969912 kubelet[2689]: I0117 01:19:02.969896 2689 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 01:19:02.970040 kubelet[2689]: I0117 01:19:02.970023 2689 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 01:19:02.971927 kubelet[2689]: I0117 01:19:02.971905 2689 state_mem.go:36] "Initialized new in-memory state store" Jan 17 01:19:02.975713 kubelet[2689]: I0117 01:19:02.975692 2689 kubelet.go:475] "Attempting to sync node with API server" Jan 17 01:19:02.976067 kubelet[2689]: I0117 01:19:02.976046 2689 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 01:19:02.976384 kubelet[2689]: I0117 01:19:02.976343 2689 kubelet.go:387] "Adding apiserver pod source" Jan 17 01:19:02.977040 kubelet[2689]: I0117 01:19:02.977019 2689 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 01:19:02.992140 kubelet[2689]: I0117 01:19:02.992071 2689 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 01:19:02.996988 kubelet[2689]: I0117 01:19:02.996961 2689 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 01:19:02.997661 kubelet[2689]: I0117 01:19:02.997406 2689 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 
01:19:03.017962 kubelet[2689]: I0117 01:19:03.017854 2689 server.go:1262] "Started kubelet" Jan 17 01:19:03.035996 kubelet[2689]: I0117 01:19:03.035939 2689 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 01:19:03.038060 kubelet[2689]: I0117 01:19:03.037794 2689 server.go:310] "Adding debug handlers to kubelet server" Jan 17 01:19:03.043742 kubelet[2689]: I0117 01:19:03.042493 2689 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 01:19:03.043742 kubelet[2689]: I0117 01:19:03.042572 2689 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 01:19:03.043742 kubelet[2689]: I0117 01:19:03.042937 2689 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 01:19:03.050326 kubelet[2689]: I0117 01:19:03.049087 2689 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 01:19:03.068494 kubelet[2689]: I0117 01:19:03.068448 2689 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 01:19:03.076440 kubelet[2689]: I0117 01:19:03.076411 2689 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 01:19:03.081295 kubelet[2689]: I0117 01:19:03.079708 2689 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 01:19:03.081295 kubelet[2689]: I0117 01:19:03.079876 2689 reconciler.go:29] "Reconciler: start to sync state" Jan 17 01:19:03.082866 kubelet[2689]: I0117 01:19:03.082818 2689 factory.go:223] Registration of the systemd container factory successfully Jan 17 01:19:03.083203 kubelet[2689]: I0117 01:19:03.083173 2689 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 01:19:03.093217 kubelet[2689]: I0117 01:19:03.093152 2689 factory.go:223] Registration of the containerd container factory successfully Jan 17 01:19:03.122297 kubelet[2689]: I0117 01:19:03.122232 2689 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 01:19:03.125545 kubelet[2689]: I0117 01:19:03.125515 2689 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 17 01:19:03.125712 kubelet[2689]: I0117 01:19:03.125692 2689 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 01:19:03.125859 kubelet[2689]: I0117 01:19:03.125839 2689 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 01:19:03.126072 kubelet[2689]: E0117 01:19:03.126047 2689 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 01:19:03.211415 kubelet[2689]: I0117 01:19:03.210151 2689 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 01:19:03.211632 kubelet[2689]: I0117 01:19:03.211609 2689 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 01:19:03.211750 kubelet[2689]: I0117 01:19:03.211733 2689 state_mem.go:36] "Initialized new in-memory state store" Jan 17 01:19:03.212320 kubelet[2689]: I0117 01:19:03.212296 2689 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 01:19:03.212455 kubelet[2689]: I0117 01:19:03.212418 2689 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 01:19:03.212561 kubelet[2689]: I0117 01:19:03.212545 2689 policy_none.go:49] "None policy: Start" Jan 17 01:19:03.212669 kubelet[2689]: I0117 01:19:03.212653 2689 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 01:19:03.212770 kubelet[2689]: I0117 01:19:03.212753 2689 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 01:19:03.213223 kubelet[2689]: I0117 01:19:03.213182 2689 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 17 01:19:03.213382 kubelet[2689]: I0117 01:19:03.213364 2689 policy_none.go:47] "Start" Jan 17 01:19:03.226080 kubelet[2689]: E0117 01:19:03.226048 2689 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 01:19:03.227319 kubelet[2689]: E0117 01:19:03.226634 2689 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 01:19:03.227713 kubelet[2689]: I0117 01:19:03.227589 2689 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 01:19:03.228554 kubelet[2689]: I0117 01:19:03.228503 2689 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 01:19:03.232121 kubelet[2689]: I0117 01:19:03.231480 2689 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 01:19:03.235327 kubelet[2689]: E0117 01:19:03.234688 2689 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 01:19:03.354280 kubelet[2689]: I0117 01:19:03.354047 2689 kubelet_node_status.go:75] "Attempting to register node" node="srv-3374x.gb1.brightbox.com" Jan 17 01:19:03.365924 kubelet[2689]: I0117 01:19:03.365343 2689 kubelet_node_status.go:124] "Node was previously registered" node="srv-3374x.gb1.brightbox.com" Jan 17 01:19:03.365924 kubelet[2689]: I0117 01:19:03.365452 2689 kubelet_node_status.go:78] "Successfully registered node" node="srv-3374x.gb1.brightbox.com" Jan 17 01:19:03.429191 kubelet[2689]: I0117 01:19:03.428993 2689 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-3374x.gb1.brightbox.com" Jan 17 01:19:03.429734 kubelet[2689]: I0117 01:19:03.429631 2689 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-3374x.gb1.brightbox.com" Jan 17 01:19:03.439846 kubelet[2689]: I0117 01:19:03.439068 2689 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-3374x.gb1.brightbox.com" Jan 17 01:19:03.457592 kubelet[2689]: I0117 01:19:03.457550 2689 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 01:19:03.463496 kubelet[2689]: I0117 01:19:03.463390 2689 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 01:19:03.464228 kubelet[2689]: I0117 01:19:03.464014 2689 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 01:19:03.482247 kubelet[2689]: I0117 01:19:03.481781 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c87b595b553eaed33d6b891f9cba340d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-3374x.gb1.brightbox.com\" (UID: \"c87b595b553eaed33d6b891f9cba340d\") " pod="kube-system/kube-apiserver-srv-3374x.gb1.brightbox.com" Jan 17 01:19:03.482247 kubelet[2689]: I0117 01:19:03.481857 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/18f6d0bd9d618dd78a48afe079a54b5b-k8s-certs\") pod \"kube-controller-manager-srv-3374x.gb1.brightbox.com\" (UID: \"18f6d0bd9d618dd78a48afe079a54b5b\") " pod="kube-system/kube-controller-manager-srv-3374x.gb1.brightbox.com" Jan 17 01:19:03.482247 kubelet[2689]: I0117 01:19:03.481891 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/18f6d0bd9d618dd78a48afe079a54b5b-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-3374x.gb1.brightbox.com\" (UID: \"18f6d0bd9d618dd78a48afe079a54b5b\") " pod="kube-system/kube-controller-manager-srv-3374x.gb1.brightbox.com" Jan 17 01:19:03.482247 kubelet[2689]: I0117 01:19:03.481919 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/18f6d0bd9d618dd78a48afe079a54b5b-ca-certs\") pod \"kube-controller-manager-srv-3374x.gb1.brightbox.com\" (UID: \"18f6d0bd9d618dd78a48afe079a54b5b\") " 
pod="kube-system/kube-controller-manager-srv-3374x.gb1.brightbox.com" Jan 17 01:19:03.482247 kubelet[2689]: I0117 01:19:03.481993 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/18f6d0bd9d618dd78a48afe079a54b5b-flexvolume-dir\") pod \"kube-controller-manager-srv-3374x.gb1.brightbox.com\" (UID: \"18f6d0bd9d618dd78a48afe079a54b5b\") " pod="kube-system/kube-controller-manager-srv-3374x.gb1.brightbox.com" Jan 17 01:19:03.483021 kubelet[2689]: I0117 01:19:03.482041 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/18f6d0bd9d618dd78a48afe079a54b5b-kubeconfig\") pod \"kube-controller-manager-srv-3374x.gb1.brightbox.com\" (UID: \"18f6d0bd9d618dd78a48afe079a54b5b\") " pod="kube-system/kube-controller-manager-srv-3374x.gb1.brightbox.com" Jan 17 01:19:03.483021 kubelet[2689]: I0117 01:19:03.482085 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/889642b4bf65931fbb8c79fb9b2f14c7-kubeconfig\") pod \"kube-scheduler-srv-3374x.gb1.brightbox.com\" (UID: \"889642b4bf65931fbb8c79fb9b2f14c7\") " pod="kube-system/kube-scheduler-srv-3374x.gb1.brightbox.com" Jan 17 01:19:03.483021 kubelet[2689]: I0117 01:19:03.482126 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c87b595b553eaed33d6b891f9cba340d-ca-certs\") pod \"kube-apiserver-srv-3374x.gb1.brightbox.com\" (UID: \"c87b595b553eaed33d6b891f9cba340d\") " pod="kube-system/kube-apiserver-srv-3374x.gb1.brightbox.com" Jan 17 01:19:03.483808 kubelet[2689]: I0117 01:19:03.483718 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c87b595b553eaed33d6b891f9cba340d-k8s-certs\") pod \"kube-apiserver-srv-3374x.gb1.brightbox.com\" (UID: \"c87b595b553eaed33d6b891f9cba340d\") " pod="kube-system/kube-apiserver-srv-3374x.gb1.brightbox.com" Jan 17 01:19:04.006002 kubelet[2689]: I0117 01:19:04.005589 2689 apiserver.go:52] "Watching apiserver" Jan 17 01:19:04.080357 kubelet[2689]: I0117 01:19:04.080286 2689 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 01:19:04.258571 kubelet[2689]: I0117 01:19:04.256794 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-3374x.gb1.brightbox.com" podStartSLOduration=1.256764858 podStartE2EDuration="1.256764858s" podCreationTimestamp="2026-01-17 01:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 01:19:04.256699154 +0000 UTC m=+1.467199038" watchObservedRunningTime="2026-01-17 01:19:04.256764858 +0000 UTC m=+1.467264744" Jan 17 01:19:04.289375 kubelet[2689]: I0117 01:19:04.289283 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-3374x.gb1.brightbox.com" podStartSLOduration=1.28923746 podStartE2EDuration="1.28923746s" podCreationTimestamp="2026-01-17 01:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 01:19:04.272342971 +0000 UTC m=+1.482842872" 
watchObservedRunningTime="2026-01-17 01:19:04.28923746 +0000 UTC m=+1.499737338" Jan 17 01:19:04.301987 kubelet[2689]: I0117 01:19:04.301913 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-3374x.gb1.brightbox.com" podStartSLOduration=1.301892945 podStartE2EDuration="1.301892945s" podCreationTimestamp="2026-01-17 01:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 01:19:04.290061738 +0000 UTC m=+1.500561651" watchObservedRunningTime="2026-01-17 01:19:04.301892945 +0000 UTC m=+1.512392839" Jan 17 01:19:07.886503 kubelet[2689]: I0117 01:19:07.886458 2689 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 01:19:07.887512 containerd[1503]: time="2026-01-17T01:19:07.887446862Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 01:19:07.889184 kubelet[2689]: I0117 01:19:07.888243 2689 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 01:19:08.743840 systemd[1]: Created slice kubepods-besteffort-pod42863aaf_9daf_4d30_92f5_2862c98512aa.slice - libcontainer container kubepods-besteffort-pod42863aaf_9daf_4d30_92f5_2862c98512aa.slice. Jan 17 01:19:08.820467 kubelet[2689]: I0117 01:19:08.820394 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42863aaf-9daf-4d30-92f5-2862c98512aa-kube-proxy\") pod \"kube-proxy-qz55k\" (UID: \"42863aaf-9daf-4d30-92f5-2862c98512aa\") " pod="kube-system/kube-proxy-qz55k" Jan 17 01:19:08.820467 kubelet[2689]: I0117 01:19:08.820460 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42863aaf-9daf-4d30-92f5-2862c98512aa-lib-modules\") pod \"kube-proxy-qz55k\" (UID: \"42863aaf-9daf-4d30-92f5-2862c98512aa\") " pod="kube-system/kube-proxy-qz55k" Jan 17 01:19:08.821143 kubelet[2689]: I0117 01:19:08.820495 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42863aaf-9daf-4d30-92f5-2862c98512aa-xtables-lock\") pod \"kube-proxy-qz55k\" (UID: \"42863aaf-9daf-4d30-92f5-2862c98512aa\") " pod="kube-system/kube-proxy-qz55k" Jan 17 01:19:08.821143 kubelet[2689]: I0117 01:19:08.820521 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gflj2\" (UniqueName: \"kubernetes.io/projected/42863aaf-9daf-4d30-92f5-2862c98512aa-kube-api-access-gflj2\") pod \"kube-proxy-qz55k\" (UID: \"42863aaf-9daf-4d30-92f5-2862c98512aa\") " pod="kube-system/kube-proxy-qz55k" Jan 17 01:19:09.067320 containerd[1503]: time="2026-01-17T01:19:09.065950257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qz55k,Uid:42863aaf-9daf-4d30-92f5-2862c98512aa,Namespace:kube-system,Attempt:0,}" Jan 17 01:19:09.082910 systemd[1]: Created slice kubepods-besteffort-pod217b1d42_4e95_4343_9f36_51b73eae3237.slice - libcontainer container kubepods-besteffort-pod217b1d42_4e95_4343_9f36_51b73eae3237.slice. Jan 17 01:19:09.117854 containerd[1503]: time="2026-01-17T01:19:09.117543187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:19:09.117854 containerd[1503]: time="2026-01-17T01:19:09.117623882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:19:09.117854 containerd[1503]: time="2026-01-17T01:19:09.117642903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:19:09.117854 containerd[1503]: time="2026-01-17T01:19:09.117774405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:19:09.122288 kubelet[2689]: I0117 01:19:09.121986 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/217b1d42-4e95-4343-9f36-51b73eae3237-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-27wsx\" (UID: \"217b1d42-4e95-4343-9f36-51b73eae3237\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-27wsx" Jan 17 01:19:09.124296 kubelet[2689]: I0117 01:19:09.123493 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjw7p\" (UniqueName: \"kubernetes.io/projected/217b1d42-4e95-4343-9f36-51b73eae3237-kube-api-access-cjw7p\") pod \"tigera-operator-65cdcdfd6d-27wsx\" (UID: \"217b1d42-4e95-4343-9f36-51b73eae3237\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-27wsx" Jan 17 01:19:09.154530 systemd[1]: run-containerd-runc-k8s.io-364cc3c5851a72c5e7422ef8652f461c1dc224aab9db739e80a5ab23eb44bdb5-runc.jOyip0.mount: Deactivated successfully. Jan 17 01:19:09.163694 systemd[1]: Started cri-containerd-364cc3c5851a72c5e7422ef8652f461c1dc224aab9db739e80a5ab23eb44bdb5.scope - libcontainer container 364cc3c5851a72c5e7422ef8652f461c1dc224aab9db739e80a5ab23eb44bdb5. Jan 17 01:19:09.209179 containerd[1503]: time="2026-01-17T01:19:09.209119219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qz55k,Uid:42863aaf-9daf-4d30-92f5-2862c98512aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"364cc3c5851a72c5e7422ef8652f461c1dc224aab9db739e80a5ab23eb44bdb5\"" Jan 17 01:19:09.222875 containerd[1503]: time="2026-01-17T01:19:09.222817807Z" level=info msg="CreateContainer within sandbox \"364cc3c5851a72c5e7422ef8652f461c1dc224aab9db739e80a5ab23eb44bdb5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 01:19:09.247096 containerd[1503]: time="2026-01-17T01:19:09.246931320Z" level=info msg="CreateContainer within sandbox \"364cc3c5851a72c5e7422ef8652f461c1dc224aab9db739e80a5ab23eb44bdb5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"681d97924f6739930613520a59738572e77748bd5197bc0a71684df6fca4f4b6\"" Jan 17 01:19:09.248652 containerd[1503]: time="2026-01-17T01:19:09.248542019Z" level=info msg="StartContainer for \"681d97924f6739930613520a59738572e77748bd5197bc0a71684df6fca4f4b6\"" Jan 17 01:19:09.289560 systemd[1]: Started cri-containerd-681d97924f6739930613520a59738572e77748bd5197bc0a71684df6fca4f4b6.scope - libcontainer container 681d97924f6739930613520a59738572e77748bd5197bc0a71684df6fca4f4b6. 
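For orientation, the RunPodSandbox / CreateContainer / StartContainer lines above are CRI gRPC calls from the kubelet to containerd's runtime service, and the &PodSandboxMetadata{...} text is the request metadata echoed back. A stripped-down sketch of the first call follows, using the published cri-api client against the conventional containerd socket; it is illustration only, since a real kubelet passes far more sandbox config (log directory, DNS, Linux security context) than this minimal request.

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Mirrors the PodSandboxMetadata printed in the RunPodSandbox log line.
	sb, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-qz55k",
				Uid:       "42863aaf-9daf-4d30-92f5-2862c98512aa",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", sb.PodSandboxId)
}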
Jan 17 01:19:09.340969 containerd[1503]: time="2026-01-17T01:19:09.339660943Z" level=info msg="StartContainer for \"681d97924f6739930613520a59738572e77748bd5197bc0a71684df6fca4f4b6\" returns successfully" Jan 17 01:19:09.404709 containerd[1503]: time="2026-01-17T01:19:09.404498641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-27wsx,Uid:217b1d42-4e95-4343-9f36-51b73eae3237,Namespace:tigera-operator,Attempt:0,}" Jan 17 01:19:09.452297 containerd[1503]: time="2026-01-17T01:19:09.452137152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:19:09.452297 containerd[1503]: time="2026-01-17T01:19:09.452228090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:19:09.452879 containerd[1503]: time="2026-01-17T01:19:09.452308919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:19:09.452879 containerd[1503]: time="2026-01-17T01:19:09.452487594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:19:09.486800 systemd[1]: Started cri-containerd-a3f0b300d9299454c5dd738f6f0618db377e084fefbb9237a1d9ea156131f1d1.scope - libcontainer container a3f0b300d9299454c5dd738f6f0618db377e084fefbb9237a1d9ea156131f1d1. Jan 17 01:19:09.564439 containerd[1503]: time="2026-01-17T01:19:09.564309543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-27wsx,Uid:217b1d42-4e95-4343-9f36-51b73eae3237,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a3f0b300d9299454c5dd738f6f0618db377e084fefbb9237a1d9ea156131f1d1\"" Jan 17 01:19:09.570614 containerd[1503]: time="2026-01-17T01:19:09.570567020Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 01:19:11.628677 kubelet[2689]: I0117 01:19:11.626884 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qz55k" podStartSLOduration=3.626854044 podStartE2EDuration="3.626854044s" podCreationTimestamp="2026-01-17 01:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 01:19:10.235568782 +0000 UTC m=+7.446068673" watchObservedRunningTime="2026-01-17 01:19:11.626854044 +0000 UTC m=+8.837353926" Jan 17 01:19:11.697886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159526011.mount: Deactivated successfully. 
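The podStartSLOduration figures that pod_startup_latency_tracker logs are watchObservedRunningTime minus podCreationTimestamp; no pull time is subtracted for kube-proxy because both pull timestamps are the zero value (the image was already on disk). A quick stdlib check of the 3.626854044s value, with the two timestamps copied from the entry above and the monotonic "m=+..." suffixes dropped:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches Go's default time.Time formatting used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-17 01:19:08 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-17 01:19:11.626854044 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // prints 3.626854044s, matching podStartSLOduration
}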
Jan 17 01:19:12.766049 containerd[1503]: time="2026-01-17T01:19:12.765973698Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:12.767806 containerd[1503]: time="2026-01-17T01:19:12.767734710Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 01:19:12.769888 containerd[1503]: time="2026-01-17T01:19:12.769805721Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:12.775463 containerd[1503]: time="2026-01-17T01:19:12.775409788Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:12.778513 containerd[1503]: time="2026-01-17T01:19:12.778437269Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.207819983s" Jan 17 01:19:12.778513 containerd[1503]: time="2026-01-17T01:19:12.778507253Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 01:19:12.788287 containerd[1503]: time="2026-01-17T01:19:12.788225275Z" level=info msg="CreateContainer within sandbox \"a3f0b300d9299454c5dd738f6f0618db377e084fefbb9237a1d9ea156131f1d1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 01:19:12.810960 containerd[1503]: time="2026-01-17T01:19:12.810825388Z" level=info msg="CreateContainer within sandbox \"a3f0b300d9299454c5dd738f6f0618db377e084fefbb9237a1d9ea156131f1d1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3d67990e4f6f5b40df66f756b179e575ce65e6e191a2551615066304c96669af\"" Jan 17 01:19:12.812993 containerd[1503]: time="2026-01-17T01:19:12.812589576Z" level=info msg="StartContainer for \"3d67990e4f6f5b40df66f756b179e575ce65e6e191a2551615066304c96669af\"" Jan 17 01:19:12.855199 systemd[1]: run-containerd-runc-k8s.io-3d67990e4f6f5b40df66f756b179e575ce65e6e191a2551615066304c96669af-runc.x6pG7A.mount: Deactivated successfully. Jan 17 01:19:12.865930 systemd[1]: Started cri-containerd-3d67990e4f6f5b40df66f756b179e575ce65e6e191a2551615066304c96669af.scope - libcontainer container 3d67990e4f6f5b40df66f756b179e575ce65e6e191a2551615066304c96669af. 
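The pull above moves 25,061,691 bytes in 3.207819983s, roughly 7.8 MB/s, and records both a repo tag and a repo digest resolving to the same image ID. Below is a hedged sketch of the equivalent pull through containerd's own Go client into the k8s.io namespace the CRI plugin uses; the socket path is the conventional default, and running it requires access to that socket.

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The CRI plugin keeps its images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled:", img.Name(), "digest:", img.Target().Digest)
}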
Jan 17 01:19:12.913464 containerd[1503]: time="2026-01-17T01:19:12.913340092Z" level=info msg="StartContainer for \"3d67990e4f6f5b40df66f756b179e575ce65e6e191a2551615066304c96669af\" returns successfully" Jan 17 01:19:13.235098 kubelet[2689]: I0117 01:19:13.234925 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-27wsx" podStartSLOduration=2.023521385 podStartE2EDuration="5.23490618s" podCreationTimestamp="2026-01-17 01:19:08 +0000 UTC" firstStartedPulling="2026-01-17 01:19:09.568254251 +0000 UTC m=+6.778754123" lastFinishedPulling="2026-01-17 01:19:12.779639041 +0000 UTC m=+9.990138918" observedRunningTime="2026-01-17 01:19:13.223497552 +0000 UTC m=+10.433997449" watchObservedRunningTime="2026-01-17 01:19:13.23490618 +0000 UTC m=+10.445406072" Jan 17 01:19:16.427959 systemd[1]: cri-containerd-3d67990e4f6f5b40df66f756b179e575ce65e6e191a2551615066304c96669af.scope: Deactivated successfully. Jan 17 01:19:16.481954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d67990e4f6f5b40df66f756b179e575ce65e6e191a2551615066304c96669af-rootfs.mount: Deactivated successfully. Jan 17 01:19:16.644660 containerd[1503]: time="2026-01-17T01:19:16.614405252Z" level=info msg="shim disconnected" id=3d67990e4f6f5b40df66f756b179e575ce65e6e191a2551615066304c96669af namespace=k8s.io Jan 17 01:19:16.644660 containerd[1503]: time="2026-01-17T01:19:16.644610908Z" level=warning msg="cleaning up after shim disconnected" id=3d67990e4f6f5b40df66f756b179e575ce65e6e191a2551615066304c96669af namespace=k8s.io Jan 17 01:19:16.644660 containerd[1503]: time="2026-01-17T01:19:16.644643743Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 01:19:17.216300 kubelet[2689]: I0117 01:19:17.216040 2689 scope.go:117] "RemoveContainer" containerID="3d67990e4f6f5b40df66f756b179e575ce65e6e191a2551615066304c96669af" Jan 17 01:19:17.227431 containerd[1503]: time="2026-01-17T01:19:17.227051759Z" level=info msg="CreateContainer within sandbox \"a3f0b300d9299454c5dd738f6f0618db377e084fefbb9237a1d9ea156131f1d1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 17 01:19:17.255292 containerd[1503]: time="2026-01-17T01:19:17.253474483Z" level=info msg="CreateContainer within sandbox \"a3f0b300d9299454c5dd738f6f0618db377e084fefbb9237a1d9ea156131f1d1\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"7573cda6a8cc2ffc72400fbad276e5a785f370bc9c40f092d7a58096cce19f7e\"" Jan 17 01:19:17.256315 containerd[1503]: time="2026-01-17T01:19:17.256261071Z" level=info msg="StartContainer for \"7573cda6a8cc2ffc72400fbad276e5a785f370bc9c40f092d7a58096cce19f7e\"" Jan 17 01:19:17.375514 systemd[1]: Started cri-containerd-7573cda6a8cc2ffc72400fbad276e5a785f370bc9c40f092d7a58096cce19f7e.scope - libcontainer container 7573cda6a8cc2ffc72400fbad276e5a785f370bc9c40f092d7a58096cce19f7e. Jan 17 01:19:17.466785 containerd[1503]: time="2026-01-17T01:19:17.466347532Z" level=info msg="StartContainer for \"7573cda6a8cc2ffc72400fbad276e5a785f370bc9c40f092d7a58096cce19f7e\" returns successfully" Jan 17 01:19:18.551430 sudo[1758]: pam_unix(sudo:session): session closed for user root Jan 17 01:19:18.644728 sshd[1755]: pam_unix(sshd:session): session closed for user core Jan 17 01:19:18.650038 systemd-logind[1484]: Session 11 logged out. Waiting for processes to exit. Jan 17 01:19:18.652394 systemd[1]: sshd@8-10.244.8.82:22-20.161.92.111:49208.service: Deactivated successfully. 
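The sequence above (cri-containerd scope deactivated, shim disconnected, RemoveContainer, then a fresh CreateContainer with Attempt:1) is a container restart, and it surfaces in the API as an incremented restartCount on the pod. A minimal client-go read of that counter, reusing the assumed admin kubeconfig path from the earlier sketch:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name and namespace are taken from the log entries above.
	pod, err := cs.CoreV1().Pods("tigera-operator").Get(context.Background(),
		"tigera-operator-65cdcdfd6d-27wsx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		fmt.Printf("%s restarts=%d\n", st.Name, st.RestartCount)
	}
}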
Jan 17 01:19:18.656749 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 01:19:18.657189 systemd[1]: session-11.scope: Consumed 9.643s CPU time, 145.6M memory peak, 0B memory swap peak. Jan 17 01:19:18.660085 systemd-logind[1484]: Removed session 11. Jan 17 01:19:28.150015 systemd[1]: Created slice kubepods-besteffort-pod84d6baf5_44bf_4dcc_b5b3_f92b0c943367.slice - libcontainer container kubepods-besteffort-pod84d6baf5_44bf_4dcc_b5b3_f92b0c943367.slice. Jan 17 01:19:28.178079 kubelet[2689]: I0117 01:19:28.175802 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/84d6baf5-44bf-4dcc-b5b3-f92b0c943367-typha-certs\") pod \"calico-typha-7c4855d686-sj52b\" (UID: \"84d6baf5-44bf-4dcc-b5b3-f92b0c943367\") " pod="calico-system/calico-typha-7c4855d686-sj52b" Jan 17 01:19:28.178079 kubelet[2689]: I0117 01:19:28.175889 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84d6baf5-44bf-4dcc-b5b3-f92b0c943367-tigera-ca-bundle\") pod \"calico-typha-7c4855d686-sj52b\" (UID: \"84d6baf5-44bf-4dcc-b5b3-f92b0c943367\") " pod="calico-system/calico-typha-7c4855d686-sj52b" Jan 17 01:19:28.178079 kubelet[2689]: I0117 01:19:28.175982 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb6n5\" (UniqueName: \"kubernetes.io/projected/84d6baf5-44bf-4dcc-b5b3-f92b0c943367-kube-api-access-nb6n5\") pod \"calico-typha-7c4855d686-sj52b\" (UID: \"84d6baf5-44bf-4dcc-b5b3-f92b0c943367\") " pod="calico-system/calico-typha-7c4855d686-sj52b" Jan 17 01:19:28.348441 systemd[1]: Created slice kubepods-besteffort-podce7edc08_88e1_49fc_a779_af24a2289129.slice - libcontainer container kubepods-besteffort-podce7edc08_88e1_49fc_a779_af24a2289129.slice. 
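A note on the burst of FlexVolume probe failures that begins below: the kubelet probes each directory under its volume-plugin dir by exec'ing the driver binary with "init" and unmarshalling its stdout as JSON, so a missing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds produces empty output and the paired "executable file not found" and "unexpected end of JSON input" errors on every probe. A self-contained sketch of that call convention (the status struct is simplified from the FlexVolume spec):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a trimmed version of the FlexVolume DriverStatus object;
// real drivers answer "init" with at least {"status": "Success"}.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func callDriver(path string, args ...string) (*driverStatus, error) {
	out, execErr := exec.Command(path, args...).Output()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Mirrors driver-call.go:262 above: empty stdout is not valid JSON.
		return nil, fmt.Errorf("failed to unmarshal output %q: %w (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	st, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("driver status:", st.Status)
}

These probes are typically transient during Calico bootstrap: the flexvol-driver-host host-path volume attached for calico-node-7tl79 below exists so that an init container can install that nodeagent~uds binary, after which the probes stop failing.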
Jan 17 01:19:28.378237 kubelet[2689]: I0117 01:19:28.378173 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce7edc08-88e1-49fc-a779-af24a2289129-tigera-ca-bundle\") pod \"calico-node-7tl79\" (UID: \"ce7edc08-88e1-49fc-a779-af24a2289129\") " pod="calico-system/calico-node-7tl79" Jan 17 01:19:28.378237 kubelet[2689]: I0117 01:19:28.378244 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce7edc08-88e1-49fc-a779-af24a2289129-xtables-lock\") pod \"calico-node-7tl79\" (UID: \"ce7edc08-88e1-49fc-a779-af24a2289129\") " pod="calico-system/calico-node-7tl79" Jan 17 01:19:28.378566 kubelet[2689]: I0117 01:19:28.378306 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r96l\" (UniqueName: \"kubernetes.io/projected/ce7edc08-88e1-49fc-a779-af24a2289129-kube-api-access-2r96l\") pod \"calico-node-7tl79\" (UID: \"ce7edc08-88e1-49fc-a779-af24a2289129\") " pod="calico-system/calico-node-7tl79" Jan 17 01:19:28.378566 kubelet[2689]: I0117 01:19:28.378368 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ce7edc08-88e1-49fc-a779-af24a2289129-flexvol-driver-host\") pod \"calico-node-7tl79\" (UID: \"ce7edc08-88e1-49fc-a779-af24a2289129\") " pod="calico-system/calico-node-7tl79" Jan 17 01:19:28.378566 kubelet[2689]: I0117 01:19:28.378398 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ce7edc08-88e1-49fc-a779-af24a2289129-var-lib-calico\") pod \"calico-node-7tl79\" (UID: \"ce7edc08-88e1-49fc-a779-af24a2289129\") " pod="calico-system/calico-node-7tl79" Jan 17 01:19:28.378566 kubelet[2689]: I0117 01:19:28.378430 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ce7edc08-88e1-49fc-a779-af24a2289129-node-certs\") pod \"calico-node-7tl79\" (UID: \"ce7edc08-88e1-49fc-a779-af24a2289129\") " pod="calico-system/calico-node-7tl79" Jan 17 01:19:28.378566 kubelet[2689]: I0117 01:19:28.378457 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ce7edc08-88e1-49fc-a779-af24a2289129-var-run-calico\") pod \"calico-node-7tl79\" (UID: \"ce7edc08-88e1-49fc-a779-af24a2289129\") " pod="calico-system/calico-node-7tl79" Jan 17 01:19:28.378840 kubelet[2689]: I0117 01:19:28.378483 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ce7edc08-88e1-49fc-a779-af24a2289129-cni-net-dir\") pod \"calico-node-7tl79\" (UID: \"ce7edc08-88e1-49fc-a779-af24a2289129\") " pod="calico-system/calico-node-7tl79" Jan 17 01:19:28.378840 kubelet[2689]: I0117 01:19:28.378511 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce7edc08-88e1-49fc-a779-af24a2289129-lib-modules\") pod \"calico-node-7tl79\" (UID: \"ce7edc08-88e1-49fc-a779-af24a2289129\") " pod="calico-system/calico-node-7tl79" Jan 17 01:19:28.378840 kubelet[2689]: I0117 01:19:28.378553 2689 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ce7edc08-88e1-49fc-a779-af24a2289129-policysync\") pod \"calico-node-7tl79\" (UID: \"ce7edc08-88e1-49fc-a779-af24a2289129\") " pod="calico-system/calico-node-7tl79" Jan 17 01:19:28.378840 kubelet[2689]: I0117 01:19:28.378584 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ce7edc08-88e1-49fc-a779-af24a2289129-cni-bin-dir\") pod \"calico-node-7tl79\" (UID: \"ce7edc08-88e1-49fc-a779-af24a2289129\") " pod="calico-system/calico-node-7tl79" Jan 17 01:19:28.378840 kubelet[2689]: I0117 01:19:28.378621 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ce7edc08-88e1-49fc-a779-af24a2289129-cni-log-dir\") pod \"calico-node-7tl79\" (UID: \"ce7edc08-88e1-49fc-a779-af24a2289129\") " pod="calico-system/calico-node-7tl79" Jan 17 01:19:28.466952 containerd[1503]: time="2026-01-17T01:19:28.465402994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c4855d686-sj52b,Uid:84d6baf5-44bf-4dcc-b5b3-f92b0c943367,Namespace:calico-system,Attempt:0,}" Jan 17 01:19:28.485758 kubelet[2689]: E0117 01:19:28.484900 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.485758 kubelet[2689]: W0117 01:19:28.485038 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.485758 kubelet[2689]: E0117 01:19:28.485177 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.486340 kubelet[2689]: E0117 01:19:28.486150 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.486340 kubelet[2689]: W0117 01:19:28.486170 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.486340 kubelet[2689]: E0117 01:19:28.486187 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.489533 kubelet[2689]: E0117 01:19:28.489323 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.489533 kubelet[2689]: W0117 01:19:28.489348 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.489533 kubelet[2689]: E0117 01:19:28.489368 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:19:28.490489 kubelet[2689]: E0117 01:19:28.490261 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.490489 kubelet[2689]: W0117 01:19:28.490439 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.490971 kubelet[2689]: E0117 01:19:28.490457 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.495696 kubelet[2689]: E0117 01:19:28.495556 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.495696 kubelet[2689]: W0117 01:19:28.495625 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.495696 kubelet[2689]: E0117 01:19:28.495649 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.501999 kubelet[2689]: E0117 01:19:28.501510 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.501999 kubelet[2689]: W0117 01:19:28.501744 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.501999 kubelet[2689]: E0117 01:19:28.501775 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.539105 kubelet[2689]: E0117 01:19:28.538648 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.539105 kubelet[2689]: W0117 01:19:28.538684 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.539105 kubelet[2689]: E0117 01:19:28.538727 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.546244 containerd[1503]: time="2026-01-17T01:19:28.545795799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:19:28.546244 containerd[1503]: time="2026-01-17T01:19:28.545893053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:19:28.546475 containerd[1503]: time="2026-01-17T01:19:28.545928924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:19:28.546475 containerd[1503]: time="2026-01-17T01:19:28.546145315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:19:28.566161 kubelet[2689]: E0117 01:19:28.565663 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:19:28.573684 kubelet[2689]: E0117 01:19:28.573413 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.575721 kubelet[2689]: W0117 01:19:28.573445 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.575721 kubelet[2689]: E0117 01:19:28.575363 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.579458 kubelet[2689]: E0117 01:19:28.579415 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.579458 kubelet[2689]: W0117 01:19:28.579446 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.579769 kubelet[2689]: E0117 01:19:28.579475 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.580587 kubelet[2689]: E0117 01:19:28.580557 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.580587 kubelet[2689]: W0117 01:19:28.580581 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.580719 kubelet[2689]: E0117 01:19:28.580598 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.582198 kubelet[2689]: E0117 01:19:28.582169 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.582198 kubelet[2689]: W0117 01:19:28.582193 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.582541 kubelet[2689]: E0117 01:19:28.582211 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:19:28.582645 kubelet[2689]: E0117 01:19:28.582607 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.582645 kubelet[2689]: W0117 01:19:28.582624 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.582645 kubelet[2689]: E0117 01:19:28.582641 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.583855 kubelet[2689]: E0117 01:19:28.583823 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.583855 kubelet[2689]: W0117 01:19:28.583843 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.584042 kubelet[2689]: E0117 01:19:28.583859 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.585405 kubelet[2689]: E0117 01:19:28.585367 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.585853 kubelet[2689]: W0117 01:19:28.585407 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.585853 kubelet[2689]: E0117 01:19:28.585425 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.585853 kubelet[2689]: E0117 01:19:28.585736 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.585853 kubelet[2689]: W0117 01:19:28.585749 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.585853 kubelet[2689]: E0117 01:19:28.585785 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.586320 kubelet[2689]: E0117 01:19:28.586133 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.586320 kubelet[2689]: W0117 01:19:28.586148 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.586320 kubelet[2689]: E0117 01:19:28.586163 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:19:28.587406 kubelet[2689]: E0117 01:19:28.587363 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.587406 kubelet[2689]: W0117 01:19:28.587402 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.587574 kubelet[2689]: E0117 01:19:28.587419 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.587765 kubelet[2689]: E0117 01:19:28.587743 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.587765 kubelet[2689]: W0117 01:19:28.587761 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.587889 kubelet[2689]: E0117 01:19:28.587776 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.589542 kubelet[2689]: E0117 01:19:28.589519 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.589647 kubelet[2689]: W0117 01:19:28.589559 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.589647 kubelet[2689]: E0117 01:19:28.589581 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.590313 kubelet[2689]: E0117 01:19:28.589940 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.590313 kubelet[2689]: W0117 01:19:28.589954 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.590313 kubelet[2689]: E0117 01:19:28.589969 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.590636 kubelet[2689]: E0117 01:19:28.590495 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.590636 kubelet[2689]: W0117 01:19:28.590626 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.590795 kubelet[2689]: E0117 01:19:28.590644 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:19:28.591438 kubelet[2689]: E0117 01:19:28.591414 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.591438 kubelet[2689]: W0117 01:19:28.591433 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.591571 kubelet[2689]: E0117 01:19:28.591468 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.592735 kubelet[2689]: E0117 01:19:28.592616 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.592735 kubelet[2689]: W0117 01:19:28.592642 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.592735 kubelet[2689]: E0117 01:19:28.592660 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.594399 kubelet[2689]: E0117 01:19:28.594371 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.594399 kubelet[2689]: W0117 01:19:28.594395 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.594563 kubelet[2689]: E0117 01:19:28.594415 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.596411 kubelet[2689]: E0117 01:19:28.596381 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.596411 kubelet[2689]: W0117 01:19:28.596409 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.596573 kubelet[2689]: E0117 01:19:28.596430 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.596738 kubelet[2689]: E0117 01:19:28.596715 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.596738 kubelet[2689]: W0117 01:19:28.596734 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.596870 kubelet[2689]: E0117 01:19:28.596750 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:19:28.597572 kubelet[2689]: E0117 01:19:28.597442 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.597572 kubelet[2689]: W0117 01:19:28.597466 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.597572 kubelet[2689]: E0117 01:19:28.597482 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.599235 kubelet[2689]: E0117 01:19:28.598039 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.599235 kubelet[2689]: W0117 01:19:28.598055 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.599235 kubelet[2689]: E0117 01:19:28.598070 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.599235 kubelet[2689]: I0117 01:19:28.598098 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4297d\" (UniqueName: \"kubernetes.io/projected/569a36a0-46e6-4752-8b8f-005d85b2712f-kube-api-access-4297d\") pod \"csi-node-driver-zj4mv\" (UID: \"569a36a0-46e6-4752-8b8f-005d85b2712f\") " pod="calico-system/csi-node-driver-zj4mv" Jan 17 01:19:28.600079 kubelet[2689]: E0117 01:19:28.599937 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.600178 kubelet[2689]: W0117 01:19:28.600075 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.600178 kubelet[2689]: E0117 01:19:28.600099 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.600178 kubelet[2689]: I0117 01:19:28.600134 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/569a36a0-46e6-4752-8b8f-005d85b2712f-varrun\") pod \"csi-node-driver-zj4mv\" (UID: \"569a36a0-46e6-4752-8b8f-005d85b2712f\") " pod="calico-system/csi-node-driver-zj4mv" Jan 17 01:19:28.600605 kubelet[2689]: E0117 01:19:28.600583 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.600605 kubelet[2689]: W0117 01:19:28.600604 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.600824 kubelet[2689]: E0117 01:19:28.600621 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:19:28.600824 kubelet[2689]: I0117 01:19:28.600744 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/569a36a0-46e6-4752-8b8f-005d85b2712f-kubelet-dir\") pod \"csi-node-driver-zj4mv\" (UID: \"569a36a0-46e6-4752-8b8f-005d85b2712f\") " pod="calico-system/csi-node-driver-zj4mv" Jan 17 01:19:28.602132 kubelet[2689]: E0117 01:19:28.602063 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.602132 kubelet[2689]: W0117 01:19:28.602089 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.602132 kubelet[2689]: E0117 01:19:28.602106 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.602611 kubelet[2689]: E0117 01:19:28.602589 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.602750 kubelet[2689]: W0117 01:19:28.602717 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.602750 kubelet[2689]: E0117 01:19:28.602748 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.603505 kubelet[2689]: E0117 01:19:28.603474 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.603505 kubelet[2689]: W0117 01:19:28.603497 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.604896 kubelet[2689]: E0117 01:19:28.603515 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.604896 kubelet[2689]: I0117 01:19:28.603566 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/569a36a0-46e6-4752-8b8f-005d85b2712f-socket-dir\") pod \"csi-node-driver-zj4mv\" (UID: \"569a36a0-46e6-4752-8b8f-005d85b2712f\") " pod="calico-system/csi-node-driver-zj4mv" Jan 17 01:19:28.604896 kubelet[2689]: E0117 01:19:28.604716 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.604896 kubelet[2689]: W0117 01:19:28.604732 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.604896 kubelet[2689]: E0117 01:19:28.604747 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:19:28.606289 kubelet[2689]: E0117 01:19:28.605585 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.606289 kubelet[2689]: W0117 01:19:28.605606 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.606289 kubelet[2689]: E0117 01:19:28.605623 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.606683 kubelet[2689]: E0117 01:19:28.606656 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.606683 kubelet[2689]: W0117 01:19:28.606676 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.606812 kubelet[2689]: E0117 01:19:28.606692 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.607896 kubelet[2689]: E0117 01:19:28.607875 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.607896 kubelet[2689]: W0117 01:19:28.607892 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.608108 kubelet[2689]: E0117 01:19:28.607909 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.610382 kubelet[2689]: E0117 01:19:28.610360 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.610382 kubelet[2689]: W0117 01:19:28.610380 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.610658 kubelet[2689]: E0117 01:19:28.610397 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.610988 kubelet[2689]: E0117 01:19:28.610960 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.610988 kubelet[2689]: W0117 01:19:28.610980 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.611127 kubelet[2689]: E0117 01:19:28.610997 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:19:28.612390 kubelet[2689]: E0117 01:19:28.612360 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.612390 kubelet[2689]: W0117 01:19:28.612383 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.612532 kubelet[2689]: E0117 01:19:28.612401 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.612532 kubelet[2689]: I0117 01:19:28.612435 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/569a36a0-46e6-4752-8b8f-005d85b2712f-registration-dir\") pod \"csi-node-driver-zj4mv\" (UID: \"569a36a0-46e6-4752-8b8f-005d85b2712f\") " pod="calico-system/csi-node-driver-zj4mv" Jan 17 01:19:28.612969 kubelet[2689]: E0117 01:19:28.612923 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.612969 kubelet[2689]: W0117 01:19:28.612946 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.612969 kubelet[2689]: E0117 01:19:28.612962 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.614008 kubelet[2689]: E0117 01:19:28.613875 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.614105 kubelet[2689]: W0117 01:19:28.614009 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.614105 kubelet[2689]: E0117 01:19:28.614041 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.640522 systemd[1]: Started cri-containerd-927c300778485cbba91fb8ae056ae3261f089ee2100832b8f58727c0fdfdb177.scope - libcontainer container 927c300778485cbba91fb8ae056ae3261f089ee2100832b8f58727c0fdfdb177. 
Jan 17 01:19:28.660126 containerd[1503]: time="2026-01-17T01:19:28.659461897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7tl79,Uid:ce7edc08-88e1-49fc-a779-af24a2289129,Namespace:calico-system,Attempt:0,}"
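The containerd line above is the CRI RunPodSandbox call kubelet issued for calico-node-7tl79. A trimmed sketch of making the same call directly against containerd's CRI socket with k8s.io/cri-api; the real request also carries the log directory, DNS config, and Linux security options, so treat this as a skeleton only:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	// containerd's default CRI endpoint on Flatcar.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// Metadata copied from the log line above.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-node-7tl79",
				Uid:       "ce7edc08-88e1-49fc-a779-af24a2289129",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}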
Error: unexpected end of JSON input" Jan 17 01:19:28.720583 kubelet[2689]: E0117 01:19:28.718426 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.720583 kubelet[2689]: W0117 01:19:28.720388 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.720583 kubelet[2689]: E0117 01:19:28.720418 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.720976 kubelet[2689]: E0117 01:19:28.720833 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.720976 kubelet[2689]: W0117 01:19:28.720848 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.720976 kubelet[2689]: E0117 01:19:28.720863 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.726342 kubelet[2689]: E0117 01:19:28.724044 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.726342 kubelet[2689]: W0117 01:19:28.724067 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.726342 kubelet[2689]: E0117 01:19:28.724083 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.727781 containerd[1503]: time="2026-01-17T01:19:28.726227677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:19:28.727781 containerd[1503]: time="2026-01-17T01:19:28.727156627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:19:28.727781 containerd[1503]: time="2026-01-17T01:19:28.727178386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:19:28.727912 kubelet[2689]: E0117 01:19:28.727432 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.727912 kubelet[2689]: W0117 01:19:28.727448 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.727912 kubelet[2689]: E0117 01:19:28.727471 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:19:28.730155 kubelet[2689]: E0117 01:19:28.728480 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.730155 kubelet[2689]: W0117 01:19:28.728499 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.730155 kubelet[2689]: E0117 01:19:28.728515 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.730362 containerd[1503]: time="2026-01-17T01:19:28.727373982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:19:28.731103 kubelet[2689]: E0117 01:19:28.730732 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.731103 kubelet[2689]: W0117 01:19:28.730752 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.731103 kubelet[2689]: E0117 01:19:28.730769 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.731912 kubelet[2689]: E0117 01:19:28.731662 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.731912 kubelet[2689]: W0117 01:19:28.731681 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.731912 kubelet[2689]: E0117 01:19:28.731697 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.732711 kubelet[2689]: E0117 01:19:28.732460 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.732711 kubelet[2689]: W0117 01:19:28.732480 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.732711 kubelet[2689]: E0117 01:19:28.732495 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:19:28.733621 kubelet[2689]: E0117 01:19:28.733257 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.733621 kubelet[2689]: W0117 01:19:28.733390 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.733621 kubelet[2689]: E0117 01:19:28.733409 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.734170 kubelet[2689]: E0117 01:19:28.734148 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.735068 kubelet[2689]: W0117 01:19:28.734350 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.735068 kubelet[2689]: E0117 01:19:28.734376 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.735573 kubelet[2689]: E0117 01:19:28.735399 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.735573 kubelet[2689]: W0117 01:19:28.735418 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.735573 kubelet[2689]: E0117 01:19:28.735451 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.735933 kubelet[2689]: E0117 01:19:28.735912 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.736191 kubelet[2689]: W0117 01:19:28.736027 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.736191 kubelet[2689]: E0117 01:19:28.736070 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.736660 kubelet[2689]: E0117 01:19:28.736503 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.736660 kubelet[2689]: W0117 01:19:28.736522 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.736660 kubelet[2689]: E0117 01:19:28.736538 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:19:28.737383 kubelet[2689]: E0117 01:19:28.737307 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.737383 kubelet[2689]: W0117 01:19:28.737326 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.737383 kubelet[2689]: E0117 01:19:28.737344 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.738233 kubelet[2689]: E0117 01:19:28.738106 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.738233 kubelet[2689]: W0117 01:19:28.738151 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.738233 kubelet[2689]: E0117 01:19:28.738170 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.738961 kubelet[2689]: E0117 01:19:28.738821 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.738961 kubelet[2689]: W0117 01:19:28.738840 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.738961 kubelet[2689]: E0117 01:19:28.738884 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.740304 kubelet[2689]: E0117 01:19:28.739939 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.740304 kubelet[2689]: W0117 01:19:28.739959 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.740304 kubelet[2689]: E0117 01:19:28.739975 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.741124 kubelet[2689]: E0117 01:19:28.741102 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.741482 kubelet[2689]: W0117 01:19:28.741259 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.741482 kubelet[2689]: E0117 01:19:28.741362 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:19:28.742207 kubelet[2689]: E0117 01:19:28.742089 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.742207 kubelet[2689]: W0117 01:19:28.742108 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.742207 kubelet[2689]: E0117 01:19:28.742139 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.744174 kubelet[2689]: E0117 01:19:28.743956 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.744174 kubelet[2689]: W0117 01:19:28.743977 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.744174 kubelet[2689]: E0117 01:19:28.743995 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.745295 kubelet[2689]: E0117 01:19:28.744699 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.745295 kubelet[2689]: W0117 01:19:28.744727 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.745295 kubelet[2689]: E0117 01:19:28.744745 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 01:19:28.779469 systemd[1]: Started cri-containerd-3c77b526ead17197d701a212fa14c87d6ccf527bd549bff3d931c27353f168a9.scope - libcontainer container 3c77b526ead17197d701a212fa14c87d6ccf527bd549bff3d931c27353f168a9. Jan 17 01:19:28.784404 kubelet[2689]: E0117 01:19:28.784364 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 01:19:28.784404 kubelet[2689]: W0117 01:19:28.784393 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 01:19:28.784759 kubelet[2689]: E0117 01:19:28.784445 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 01:19:28.842242 containerd[1503]: time="2026-01-17T01:19:28.842191782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7tl79,Uid:ce7edc08-88e1-49fc-a779-af24a2289129,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c77b526ead17197d701a212fa14c87d6ccf527bd549bff3d931c27353f168a9\"" Jan 17 01:19:28.847169 containerd[1503]: time="2026-01-17T01:19:28.847077225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 17 01:19:28.884169 containerd[1503]: time="2026-01-17T01:19:28.884100108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c4855d686-sj52b,Uid:84d6baf5-44bf-4dcc-b5b3-f92b0c943367,Namespace:calico-system,Attempt:0,} returns sandbox id \"927c300778485cbba91fb8ae056ae3261f089ee2100832b8f58727c0fdfdb177\"" Jan 17 01:19:30.128023 kubelet[2689]: E0117 01:19:30.127873 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:19:30.520570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1691869960.mount: Deactivated successfully. Jan 17 01:19:30.715596 containerd[1503]: time="2026-01-17T01:19:30.715101396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:30.721389 containerd[1503]: time="2026-01-17T01:19:30.721308014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 17 01:19:30.730560 containerd[1503]: time="2026-01-17T01:19:30.730470838Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:30.742397 containerd[1503]: time="2026-01-17T01:19:30.742134861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:30.745297 containerd[1503]: time="2026-01-17T01:19:30.743601735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.89647159s" Jan 17 01:19:30.745297 containerd[1503]: time="2026-01-17T01:19:30.743657177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 17 01:19:30.750330 containerd[1503]: time="2026-01-17T01:19:30.747590894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 17 01:19:30.756867 containerd[1503]: time="2026-01-17T01:19:30.756805299Z" level=info msg="CreateContainer within sandbox \"3c77b526ead17197d701a212fa14c87d6ccf527bd549bff3d931c27353f168a9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 01:19:30.794754 containerd[1503]: 
time="2026-01-17T01:19:30.793493758Z" level=info msg="CreateContainer within sandbox \"3c77b526ead17197d701a212fa14c87d6ccf527bd549bff3d931c27353f168a9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9008a3e3699a3a912864bf3d838f0f5614ea4be1cf53dd190b5dfe191f9efce2\"" Jan 17 01:19:30.794754 containerd[1503]: time="2026-01-17T01:19:30.794724477Z" level=info msg="StartContainer for \"9008a3e3699a3a912864bf3d838f0f5614ea4be1cf53dd190b5dfe191f9efce2\"" Jan 17 01:19:30.871501 systemd[1]: Started cri-containerd-9008a3e3699a3a912864bf3d838f0f5614ea4be1cf53dd190b5dfe191f9efce2.scope - libcontainer container 9008a3e3699a3a912864bf3d838f0f5614ea4be1cf53dd190b5dfe191f9efce2. Jan 17 01:19:30.991730 containerd[1503]: time="2026-01-17T01:19:30.991659608Z" level=info msg="StartContainer for \"9008a3e3699a3a912864bf3d838f0f5614ea4be1cf53dd190b5dfe191f9efce2\" returns successfully" Jan 17 01:19:31.030904 systemd[1]: cri-containerd-9008a3e3699a3a912864bf3d838f0f5614ea4be1cf53dd190b5dfe191f9efce2.scope: Deactivated successfully. Jan 17 01:19:31.131030 containerd[1503]: time="2026-01-17T01:19:31.130764629Z" level=info msg="shim disconnected" id=9008a3e3699a3a912864bf3d838f0f5614ea4be1cf53dd190b5dfe191f9efce2 namespace=k8s.io Jan 17 01:19:31.131030 containerd[1503]: time="2026-01-17T01:19:31.131013093Z" level=warning msg="cleaning up after shim disconnected" id=9008a3e3699a3a912864bf3d838f0f5614ea4be1cf53dd190b5dfe191f9efce2 namespace=k8s.io Jan 17 01:19:31.131030 containerd[1503]: time="2026-01-17T01:19:31.131039931Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 01:19:31.448623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9008a3e3699a3a912864bf3d838f0f5614ea4be1cf53dd190b5dfe191f9efce2-rootfs.mount: Deactivated successfully. 
Jan 17 01:19:32.126985 kubelet[2689]: E0117 01:19:32.126883 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:19:34.127129 kubelet[2689]: E0117 01:19:34.127010 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:19:36.126898 kubelet[2689]: E0117 01:19:36.126836 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:19:36.246570 containerd[1503]: time="2026-01-17T01:19:36.246514023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:36.248300 containerd[1503]: time="2026-01-17T01:19:36.248225034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Jan 17 01:19:36.248988 containerd[1503]: time="2026-01-17T01:19:36.248927967Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:36.252838 containerd[1503]: time="2026-01-17T01:19:36.252481328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:36.253687 containerd[1503]: time="2026-01-17T01:19:36.253646684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 5.506002711s" Jan 17 01:19:36.253790 containerd[1503]: time="2026-01-17T01:19:36.253690926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 17 01:19:36.256037 containerd[1503]: time="2026-01-17T01:19:36.256005918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 01:19:36.311255 containerd[1503]: time="2026-01-17T01:19:36.311065298Z" level=info msg="CreateContainer within sandbox \"927c300778485cbba91fb8ae056ae3261f089ee2100832b8f58727c0fdfdb177\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 01:19:36.362514 containerd[1503]: time="2026-01-17T01:19:36.362453906Z" level=info msg="CreateContainer within sandbox \"927c300778485cbba91fb8ae056ae3261f089ee2100832b8f58727c0fdfdb177\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"eef8727b02e292ba37645ccf3fea4949ee6304e98ef21f1440b98deac431ceda\"" Jan 17 01:19:36.364179 
containerd[1503]: time="2026-01-17T01:19:36.364129750Z" level=info msg="StartContainer for \"eef8727b02e292ba37645ccf3fea4949ee6304e98ef21f1440b98deac431ceda\"" Jan 17 01:19:36.440523 systemd[1]: Started cri-containerd-eef8727b02e292ba37645ccf3fea4949ee6304e98ef21f1440b98deac431ceda.scope - libcontainer container eef8727b02e292ba37645ccf3fea4949ee6304e98ef21f1440b98deac431ceda. Jan 17 01:19:36.567950 containerd[1503]: time="2026-01-17T01:19:36.567891634Z" level=info msg="StartContainer for \"eef8727b02e292ba37645ccf3fea4949ee6304e98ef21f1440b98deac431ceda\" returns successfully" Jan 17 01:19:37.433084 kubelet[2689]: I0117 01:19:37.432986 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c4855d686-sj52b" podStartSLOduration=2.064684997 podStartE2EDuration="9.432962377s" podCreationTimestamp="2026-01-17 01:19:28 +0000 UTC" firstStartedPulling="2026-01-17 01:19:28.887441102 +0000 UTC m=+26.097940979" lastFinishedPulling="2026-01-17 01:19:36.255718474 +0000 UTC m=+33.466218359" observedRunningTime="2026-01-17 01:19:37.414452389 +0000 UTC m=+34.624952296" watchObservedRunningTime="2026-01-17 01:19:37.432962377 +0000 UTC m=+34.643462257" Jan 17 01:19:38.127577 kubelet[2689]: E0117 01:19:38.127464 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:19:40.129639 kubelet[2689]: E0117 01:19:40.127200 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:19:42.128146 kubelet[2689]: E0117 01:19:42.127421 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:19:42.357389 containerd[1503]: time="2026-01-17T01:19:42.357305030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:42.359712 containerd[1503]: time="2026-01-17T01:19:42.359560930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 01:19:42.364530 containerd[1503]: time="2026-01-17T01:19:42.364443944Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:42.369120 containerd[1503]: time="2026-01-17T01:19:42.368168771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:42.371739 containerd[1503]: time="2026-01-17T01:19:42.370974203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.11492296s" Jan 17 01:19:42.371739 containerd[1503]: time="2026-01-17T01:19:42.371021333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 01:19:42.382165 containerd[1503]: time="2026-01-17T01:19:42.381612058Z" level=info msg="CreateContainer within sandbox \"3c77b526ead17197d701a212fa14c87d6ccf527bd549bff3d931c27353f168a9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 01:19:42.412087 containerd[1503]: time="2026-01-17T01:19:42.412000472Z" level=info msg="CreateContainer within sandbox \"3c77b526ead17197d701a212fa14c87d6ccf527bd549bff3d931c27353f168a9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9ba65520cf982fca42cdcb3b90d887348297a3a7481f414aa264109d0ccc8585\"" Jan 17 01:19:42.413102 containerd[1503]: time="2026-01-17T01:19:42.413028761Z" level=info msg="StartContainer for \"9ba65520cf982fca42cdcb3b90d887348297a3a7481f414aa264109d0ccc8585\"" Jan 17 01:19:42.481836 systemd[1]: Started cri-containerd-9ba65520cf982fca42cdcb3b90d887348297a3a7481f414aa264109d0ccc8585.scope - libcontainer container 9ba65520cf982fca42cdcb3b90d887348297a3a7481f414aa264109d0ccc8585. Jan 17 01:19:42.531071 containerd[1503]: time="2026-01-17T01:19:42.531017703Z" level=info msg="StartContainer for \"9ba65520cf982fca42cdcb3b90d887348297a3a7481f414aa264109d0ccc8585\" returns successfully" Jan 17 01:19:43.545847 systemd[1]: cri-containerd-9ba65520cf982fca42cdcb3b90d887348297a3a7481f414aa264109d0ccc8585.scope: Deactivated successfully. Jan 17 01:19:43.580369 kubelet[2689]: I0117 01:19:43.579243 2689 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 17 01:19:43.596837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ba65520cf982fca42cdcb3b90d887348297a3a7481f414aa264109d0ccc8585-rootfs.mount: Deactivated successfully. Jan 17 01:19:43.700662 containerd[1503]: time="2026-01-17T01:19:43.700537546Z" level=info msg="shim disconnected" id=9ba65520cf982fca42cdcb3b90d887348297a3a7481f414aa264109d0ccc8585 namespace=k8s.io Jan 17 01:19:43.701315 containerd[1503]: time="2026-01-17T01:19:43.700675549Z" level=warning msg="cleaning up after shim disconnected" id=9ba65520cf982fca42cdcb3b90d887348297a3a7481f414aa264109d0ccc8585 namespace=k8s.io Jan 17 01:19:43.701315 containerd[1503]: time="2026-01-17T01:19:43.700695501Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 01:19:43.755520 systemd[1]: Created slice kubepods-burstable-pod26acde11_c245_44e9_abdd_2ebc74cfcad2.slice - libcontainer container kubepods-burstable-pod26acde11_c245_44e9_abdd_2ebc74cfcad2.slice. 
Jan 17 01:19:43.758201 kubelet[2689]: E0117 01:19:43.758154 2689 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-3374x.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-3374x.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"calico-apiserver\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Jan 17 01:19:43.763585 kubelet[2689]: E0117 01:19:43.763417 2689 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:srv-3374x.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-3374x.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"calico-apiserver\"/\"calico-apiserver-certs\"" type="*v1.Secret" Jan 17 01:19:43.771426 systemd[1]: Created slice kubepods-burstable-pod612dd22b_6369_4c99_85b1_59da9f6da310.slice - libcontainer container kubepods-burstable-pod612dd22b_6369_4c99_85b1_59da9f6da310.slice. Jan 17 01:19:43.788053 systemd[1]: Created slice kubepods-besteffort-pod94302249_2c45_4fa9_a8ed_6356d7141062.slice - libcontainer container kubepods-besteffort-pod94302249_2c45_4fa9_a8ed_6356d7141062.slice. Jan 17 01:19:43.806940 systemd[1]: Created slice kubepods-besteffort-pod5fbd3817_1243_49f3_ac44_7bd3e32af698.slice - libcontainer container kubepods-besteffort-pod5fbd3817_1243_49f3_ac44_7bd3e32af698.slice. Jan 17 01:19:43.821606 systemd[1]: Created slice kubepods-besteffort-pod3996ebc6_9eb5_4686_84b4_4c62f64f0ca5.slice - libcontainer container kubepods-besteffort-pod3996ebc6_9eb5_4686_84b4_4c62f64f0ca5.slice. Jan 17 01:19:43.834748 systemd[1]: Created slice kubepods-besteffort-pode0fcbf88_1dad_4938_9ddc_ff2aaa8588a0.slice - libcontainer container kubepods-besteffort-pode0fcbf88_1dad_4938_9ddc_ff2aaa8588a0.slice. Jan 17 01:19:43.847001 systemd[1]: Created slice kubepods-besteffort-pod722a4a78_bbcc_4e35_a380_cd81c1aedcd6.slice - libcontainer container kubepods-besteffort-pod722a4a78_bbcc_4e35_a380_cd81c1aedcd6.slice. 
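The two "Failed to watch ... no relationship found between node ... and this object" errors above are the node authorizer at work: a kubelet may only read secrets and configmaps referenced by pods already bound to it, and the calico-apiserver pods have not yet been scheduled to this node, so the list calls are rejected rather than merely empty. A sketch of probing the same authorization decision with a SelfSubjectAccessReview via client-go; the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"
	"log"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed location of the kubelet's credentials on this node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	sar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Namespace: "calico-apiserver",
				Verb:      "list",
				Resource:  "secrets",
				Name:      "calico-apiserver-certs",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Denied until a pod mounting the secret is bound to this node.
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}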
Jan 17 01:19:43.854785 kubelet[2689]: I0117 01:19:43.854746 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/722a4a78-bbcc-4e35-a380-cd81c1aedcd6-tigera-ca-bundle\") pod \"calico-kube-controllers-796c9cbbf8-tx6d5\" (UID: \"722a4a78-bbcc-4e35-a380-cd81c1aedcd6\") " pod="calico-system/calico-kube-controllers-796c9cbbf8-tx6d5" Jan 17 01:19:43.855708 kubelet[2689]: I0117 01:19:43.854864 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03cd3dbd-5e1b-4532-9c89-eb080f7c53df-config\") pod \"goldmane-7c778bb748-trqpb\" (UID: \"03cd3dbd-5e1b-4532-9c89-eb080f7c53df\") " pod="calico-system/goldmane-7c778bb748-trqpb" Jan 17 01:19:43.855708 kubelet[2689]: I0117 01:19:43.855077 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/03cd3dbd-5e1b-4532-9c89-eb080f7c53df-goldmane-key-pair\") pod \"goldmane-7c778bb748-trqpb\" (UID: \"03cd3dbd-5e1b-4532-9c89-eb080f7c53df\") " pod="calico-system/goldmane-7c778bb748-trqpb" Jan 17 01:19:43.855708 kubelet[2689]: I0117 01:19:43.855283 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z85jb\" (UniqueName: \"kubernetes.io/projected/e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0-kube-api-access-z85jb\") pod \"calico-apiserver-6b97d9fcf7-cbr65\" (UID: \"e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0\") " pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" Jan 17 01:19:43.855708 kubelet[2689]: I0117 01:19:43.855363 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3996ebc6-9eb5-4686-84b4-4c62f64f0ca5-calico-apiserver-certs\") pod \"calico-apiserver-8559fb66ff-gwd8z\" (UID: \"3996ebc6-9eb5-4686-84b4-4c62f64f0ca5\") " pod="calico-apiserver/calico-apiserver-8559fb66ff-gwd8z" Jan 17 01:19:43.856706 kubelet[2689]: I0117 01:19:43.855960 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2ml2\" (UniqueName: \"kubernetes.io/projected/5fbd3817-1243-49f3-ac44-7bd3e32af698-kube-api-access-v2ml2\") pod \"whisker-5665f4c7d8-97hgh\" (UID: \"5fbd3817-1243-49f3-ac44-7bd3e32af698\") " pod="calico-system/whisker-5665f4c7d8-97hgh" Jan 17 01:19:43.856706 kubelet[2689]: I0117 01:19:43.856154 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5fbd3817-1243-49f3-ac44-7bd3e32af698-whisker-backend-key-pair\") pod \"whisker-5665f4c7d8-97hgh\" (UID: \"5fbd3817-1243-49f3-ac44-7bd3e32af698\") " pod="calico-system/whisker-5665f4c7d8-97hgh" Jan 17 01:19:43.856706 kubelet[2689]: I0117 01:19:43.856220 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03cd3dbd-5e1b-4532-9c89-eb080f7c53df-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-trqpb\" (UID: \"03cd3dbd-5e1b-4532-9c89-eb080f7c53df\") " pod="calico-system/goldmane-7c778bb748-trqpb" Jan 17 01:19:43.856706 kubelet[2689]: I0117 01:19:43.856404 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/612dd22b-6369-4c99-85b1-59da9f6da310-config-volume\") pod \"coredns-66bc5c9577-fj97p\" (UID: \"612dd22b-6369-4c99-85b1-59da9f6da310\") " pod="kube-system/coredns-66bc5c9577-fj97p" Jan 17 01:19:43.856706 kubelet[2689]: I0117 01:19:43.856449 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fbd3817-1243-49f3-ac44-7bd3e32af698-whisker-ca-bundle\") pod \"whisker-5665f4c7d8-97hgh\" (UID: \"5fbd3817-1243-49f3-ac44-7bd3e32af698\") " pod="calico-system/whisker-5665f4c7d8-97hgh" Jan 17 01:19:43.856981 kubelet[2689]: I0117 01:19:43.856510 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9jgb\" (UniqueName: \"kubernetes.io/projected/722a4a78-bbcc-4e35-a380-cd81c1aedcd6-kube-api-access-j9jgb\") pod \"calico-kube-controllers-796c9cbbf8-tx6d5\" (UID: \"722a4a78-bbcc-4e35-a380-cd81c1aedcd6\") " pod="calico-system/calico-kube-controllers-796c9cbbf8-tx6d5" Jan 17 01:19:43.858394 kubelet[2689]: I0117 01:19:43.857402 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bplmk\" (UniqueName: \"kubernetes.io/projected/03cd3dbd-5e1b-4532-9c89-eb080f7c53df-kube-api-access-bplmk\") pod \"goldmane-7c778bb748-trqpb\" (UID: \"03cd3dbd-5e1b-4532-9c89-eb080f7c53df\") " pod="calico-system/goldmane-7c778bb748-trqpb" Jan 17 01:19:43.858394 kubelet[2689]: I0117 01:19:43.857836 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26acde11-c245-44e9-abdd-2ebc74cfcad2-config-volume\") pod \"coredns-66bc5c9577-x4jzn\" (UID: \"26acde11-c245-44e9-abdd-2ebc74cfcad2\") " pod="kube-system/coredns-66bc5c9577-x4jzn" Jan 17 01:19:43.858394 kubelet[2689]: I0117 01:19:43.857877 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qh5b\" (UniqueName: \"kubernetes.io/projected/26acde11-c245-44e9-abdd-2ebc74cfcad2-kube-api-access-5qh5b\") pod \"coredns-66bc5c9577-x4jzn\" (UID: \"26acde11-c245-44e9-abdd-2ebc74cfcad2\") " pod="kube-system/coredns-66bc5c9577-x4jzn" Jan 17 01:19:43.858394 kubelet[2689]: I0117 01:19:43.857959 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldqls\" (UniqueName: \"kubernetes.io/projected/612dd22b-6369-4c99-85b1-59da9f6da310-kube-api-access-ldqls\") pod \"coredns-66bc5c9577-fj97p\" (UID: \"612dd22b-6369-4c99-85b1-59da9f6da310\") " pod="kube-system/coredns-66bc5c9577-fj97p" Jan 17 01:19:43.858394 kubelet[2689]: I0117 01:19:43.858019 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0-calico-apiserver-certs\") pod \"calico-apiserver-6b97d9fcf7-cbr65\" (UID: \"e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0\") " pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" Jan 17 01:19:43.859089 kubelet[2689]: I0117 01:19:43.858104 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksghv\" (UniqueName: \"kubernetes.io/projected/94302249-2c45-4fa9-a8ed-6356d7141062-kube-api-access-ksghv\") pod \"calico-apiserver-6b97d9fcf7-9dv76\" (UID: \"94302249-2c45-4fa9-a8ed-6356d7141062\") " 
pod="calico-apiserver/calico-apiserver-6b97d9fcf7-9dv76" Jan 17 01:19:43.859089 kubelet[2689]: I0117 01:19:43.858834 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/94302249-2c45-4fa9-a8ed-6356d7141062-calico-apiserver-certs\") pod \"calico-apiserver-6b97d9fcf7-9dv76\" (UID: \"94302249-2c45-4fa9-a8ed-6356d7141062\") " pod="calico-apiserver/calico-apiserver-6b97d9fcf7-9dv76" Jan 17 01:19:43.859995 kubelet[2689]: I0117 01:19:43.859308 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr7x7\" (UniqueName: \"kubernetes.io/projected/3996ebc6-9eb5-4686-84b4-4c62f64f0ca5-kube-api-access-gr7x7\") pod \"calico-apiserver-8559fb66ff-gwd8z\" (UID: \"3996ebc6-9eb5-4686-84b4-4c62f64f0ca5\") " pod="calico-apiserver/calico-apiserver-8559fb66ff-gwd8z" Jan 17 01:19:43.870490 systemd[1]: Created slice kubepods-besteffort-pod03cd3dbd_5e1b_4532_9c89_eb080f7c53df.slice - libcontainer container kubepods-besteffort-pod03cd3dbd_5e1b_4532_9c89_eb080f7c53df.slice. Jan 17 01:19:44.066873 containerd[1503]: time="2026-01-17T01:19:44.066765763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x4jzn,Uid:26acde11-c245-44e9-abdd-2ebc74cfcad2,Namespace:kube-system,Attempt:0,}" Jan 17 01:19:44.086796 containerd[1503]: time="2026-01-17T01:19:44.086002265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fj97p,Uid:612dd22b-6369-4c99-85b1-59da9f6da310,Namespace:kube-system,Attempt:0,}" Jan 17 01:19:44.131426 containerd[1503]: time="2026-01-17T01:19:44.130433577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5665f4c7d8-97hgh,Uid:5fbd3817-1243-49f3-ac44-7bd3e32af698,Namespace:calico-system,Attempt:0,}" Jan 17 01:19:44.140867 systemd[1]: Created slice kubepods-besteffort-pod569a36a0_46e6_4752_8b8f_005d85b2712f.slice - libcontainer container kubepods-besteffort-pod569a36a0_46e6_4752_8b8f_005d85b2712f.slice. 
Jan 17 01:19:44.153962 containerd[1503]: time="2026-01-17T01:19:44.153908877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zj4mv,Uid:569a36a0-46e6-4752-8b8f-005d85b2712f,Namespace:calico-system,Attempt:0,}" Jan 17 01:19:44.177703 containerd[1503]: time="2026-01-17T01:19:44.177649249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796c9cbbf8-tx6d5,Uid:722a4a78-bbcc-4e35-a380-cd81c1aedcd6,Namespace:calico-system,Attempt:0,}" Jan 17 01:19:44.185065 containerd[1503]: time="2026-01-17T01:19:44.185026173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-trqpb,Uid:03cd3dbd-5e1b-4532-9c89-eb080f7c53df,Namespace:calico-system,Attempt:0,}" Jan 17 01:19:44.432964 containerd[1503]: time="2026-01-17T01:19:44.432786374Z" level=error msg="Failed to destroy network for sandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.433661 containerd[1503]: time="2026-01-17T01:19:44.433330004Z" level=error msg="encountered an error cleaning up failed sandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.433661 containerd[1503]: time="2026-01-17T01:19:44.433405404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x4jzn,Uid:26acde11-c245-44e9-abdd-2ebc74cfcad2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.442607 kubelet[2689]: E0117 01:19:44.441304 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.442607 kubelet[2689]: E0117 01:19:44.441834 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-x4jzn" Jan 17 01:19:44.442607 kubelet[2689]: E0117 01:19:44.441885 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-x4jzn" Jan 17 
01:19:44.442876 kubelet[2689]: E0117 01:19:44.441970 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-x4jzn_kube-system(26acde11-c245-44e9-abdd-2ebc74cfcad2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-x4jzn_kube-system(26acde11-c245-44e9-abdd-2ebc74cfcad2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-x4jzn" podUID="26acde11-c245-44e9-abdd-2ebc74cfcad2" Jan 17 01:19:44.458222 kubelet[2689]: I0117 01:19:44.458179 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:19:44.500594 containerd[1503]: time="2026-01-17T01:19:44.496704458Z" level=error msg="Failed to destroy network for sandbox \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.500594 containerd[1503]: time="2026-01-17T01:19:44.497183833Z" level=error msg="encountered an error cleaning up failed sandbox \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.500594 containerd[1503]: time="2026-01-17T01:19:44.497253510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zj4mv,Uid:569a36a0-46e6-4752-8b8f-005d85b2712f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.500889 kubelet[2689]: E0117 01:19:44.498784 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.500889 kubelet[2689]: E0117 01:19:44.498849 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zj4mv" Jan 17 01:19:44.500889 kubelet[2689]: E0117 01:19:44.498879 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zj4mv" Jan 17 01:19:44.501050 kubelet[2689]: E0117 01:19:44.498979 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zj4mv_calico-system(569a36a0-46e6-4752-8b8f-005d85b2712f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zj4mv_calico-system(569a36a0-46e6-4752-8b8f-005d85b2712f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:19:44.512314 containerd[1503]: time="2026-01-17T01:19:44.511194009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 01:19:44.526040 containerd[1503]: time="2026-01-17T01:19:44.525403593Z" level=info msg="StopPodSandbox for \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\"" Jan 17 01:19:44.548016 containerd[1503]: time="2026-01-17T01:19:44.547956377Z" level=error msg="Failed to destroy network for sandbox \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.549014 containerd[1503]: time="2026-01-17T01:19:44.548778816Z" level=error msg="encountered an error cleaning up failed sandbox \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.549014 containerd[1503]: time="2026-01-17T01:19:44.548848531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796c9cbbf8-tx6d5,Uid:722a4a78-bbcc-4e35-a380-cd81c1aedcd6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.549218 kubelet[2689]: E0117 01:19:44.549156 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.549340 kubelet[2689]: E0117 01:19:44.549243 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-796c9cbbf8-tx6d5" Jan 17 01:19:44.549399 kubelet[2689]: E0117 01:19:44.549330 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-796c9cbbf8-tx6d5" Jan 17 01:19:44.549838 kubelet[2689]: E0117 01:19:44.549442 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-796c9cbbf8-tx6d5_calico-system(722a4a78-bbcc-4e35-a380-cd81c1aedcd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-796c9cbbf8-tx6d5_calico-system(722a4a78-bbcc-4e35-a380-cd81c1aedcd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-796c9cbbf8-tx6d5" podUID="722a4a78-bbcc-4e35-a380-cd81c1aedcd6" Jan 17 01:19:44.557668 containerd[1503]: time="2026-01-17T01:19:44.557603041Z" level=info msg="Ensure that sandbox 4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561 in task-service has been cleanup successfully" Jan 17 01:19:44.587431 containerd[1503]: time="2026-01-17T01:19:44.587193103Z" level=error msg="Failed to destroy network for sandbox \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.587766 containerd[1503]: time="2026-01-17T01:19:44.587702520Z" level=error msg="encountered an error cleaning up failed sandbox \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.588198 containerd[1503]: time="2026-01-17T01:19:44.587770618Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fj97p,Uid:612dd22b-6369-4c99-85b1-59da9f6da310,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.588445 kubelet[2689]: E0117 01:19:44.588124 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.588445 kubelet[2689]: E0117 01:19:44.588230 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fj97p" Jan 17 01:19:44.588445 kubelet[2689]: E0117 01:19:44.588430 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fj97p" Jan 17 01:19:44.590179 kubelet[2689]: E0117 01:19:44.588539 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-fj97p_kube-system(612dd22b-6369-4c99-85b1-59da9f6da310)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-fj97p_kube-system(612dd22b-6369-4c99-85b1-59da9f6da310)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fj97p" podUID="612dd22b-6369-4c99-85b1-59da9f6da310" Jan 17 01:19:44.612673 containerd[1503]: time="2026-01-17T01:19:44.612200083Z" level=error msg="Failed to destroy network for sandbox \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.613129 containerd[1503]: time="2026-01-17T01:19:44.613071266Z" level=error msg="encountered an error cleaning up failed sandbox \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.613296 containerd[1503]: time="2026-01-17T01:19:44.613184568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-trqpb,Uid:03cd3dbd-5e1b-4532-9c89-eb080f7c53df,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.619165 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815-shm.mount: Deactivated successfully. 
Jan 17 01:19:44.625136 kubelet[2689]: E0117 01:19:44.620521 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.625136 kubelet[2689]: E0117 01:19:44.620605 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-trqpb" Jan 17 01:19:44.625136 kubelet[2689]: E0117 01:19:44.620641 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-trqpb" Jan 17 01:19:44.625408 kubelet[2689]: E0117 01:19:44.620720 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-trqpb_calico-system(03cd3dbd-5e1b-4532-9c89-eb080f7c53df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-trqpb_calico-system(03cd3dbd-5e1b-4532-9c89-eb080f7c53df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-trqpb" podUID="03cd3dbd-5e1b-4532-9c89-eb080f7c53df" Jan 17 01:19:44.629455 containerd[1503]: time="2026-01-17T01:19:44.629399088Z" level=error msg="Failed to destroy network for sandbox \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.633237 containerd[1503]: time="2026-01-17T01:19:44.633183208Z" level=error msg="encountered an error cleaning up failed sandbox \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.633349 containerd[1503]: time="2026-01-17T01:19:44.633298071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5665f4c7d8-97hgh,Uid:5fbd3817-1243-49f3-ac44-7bd3e32af698,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.635355 kubelet[2689]: E0117 01:19:44.634558 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.635355 kubelet[2689]: E0117 01:19:44.634631 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5665f4c7d8-97hgh" Jan 17 01:19:44.635355 kubelet[2689]: E0117 01:19:44.634692 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5665f4c7d8-97hgh" Jan 17 01:19:44.635945 kubelet[2689]: E0117 01:19:44.634765 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5665f4c7d8-97hgh_calico-system(5fbd3817-1243-49f3-ac44-7bd3e32af698)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5665f4c7d8-97hgh_calico-system(5fbd3817-1243-49f3-ac44-7bd3e32af698)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5665f4c7d8-97hgh" podUID="5fbd3817-1243-49f3-ac44-7bd3e32af698" Jan 17 01:19:44.636116 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9-shm.mount: Deactivated successfully. 
Jan 17 01:19:44.671291 containerd[1503]: time="2026-01-17T01:19:44.671202513Z" level=error msg="StopPodSandbox for \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\" failed" error="failed to destroy network for sandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:44.671684 kubelet[2689]: E0117 01:19:44.671618 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:19:44.671784 kubelet[2689]: E0117 01:19:44.671704 2689 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561"} Jan 17 01:19:44.671850 kubelet[2689]: E0117 01:19:44.671798 2689 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26acde11-c245-44e9-abdd-2ebc74cfcad2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:19:44.671958 kubelet[2689]: E0117 01:19:44.671835 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26acde11-c245-44e9-abdd-2ebc74cfcad2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-x4jzn" podUID="26acde11-c245-44e9-abdd-2ebc74cfcad2" Jan 17 01:19:44.963120 kubelet[2689]: E0117 01:19:44.962247 2689 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 17 01:19:44.963120 kubelet[2689]: E0117 01:19:44.962446 2689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94302249-2c45-4fa9-a8ed-6356d7141062-calico-apiserver-certs podName:94302249-2c45-4fa9-a8ed-6356d7141062 nodeName:}" failed. No retries permitted until 2026-01-17 01:19:45.462409221 +0000 UTC m=+42.672909102 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/94302249-2c45-4fa9-a8ed-6356d7141062-calico-apiserver-certs") pod "calico-apiserver-6b97d9fcf7-9dv76" (UID: "94302249-2c45-4fa9-a8ed-6356d7141062") : failed to sync secret cache: timed out waiting for the condition Jan 17 01:19:44.963120 kubelet[2689]: E0117 01:19:44.962251 2689 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 17 01:19:44.963120 kubelet[2689]: E0117 01:19:44.962793 2689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0-calico-apiserver-certs podName:e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0 nodeName:}" failed. No retries permitted until 2026-01-17 01:19:45.462778748 +0000 UTC m=+42.673278625 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0-calico-apiserver-certs") pod "calico-apiserver-6b97d9fcf7-cbr65" (UID: "e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0") : failed to sync secret cache: timed out waiting for the condition Jan 17 01:19:44.969665 kubelet[2689]: E0117 01:19:44.969553 2689 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 17 01:19:44.969665 kubelet[2689]: E0117 01:19:44.969635 2689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3996ebc6-9eb5-4686-84b4-4c62f64f0ca5-calico-apiserver-certs podName:3996ebc6-9eb5-4686-84b4-4c62f64f0ca5 nodeName:}" failed. No retries permitted until 2026-01-17 01:19:45.469615342 +0000 UTC m=+42.680115219 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/3996ebc6-9eb5-4686-84b4-4c62f64f0ca5-calico-apiserver-certs") pod "calico-apiserver-8559fb66ff-gwd8z" (UID: "3996ebc6-9eb5-4686-84b4-4c62f64f0ca5") : failed to sync secret cache: timed out waiting for the condition Jan 17 01:19:45.508534 kubelet[2689]: I0117 01:19:45.507589 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Jan 17 01:19:45.509473 containerd[1503]: time="2026-01-17T01:19:45.509206404Z" level=info msg="StopPodSandbox for \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\"" Jan 17 01:19:45.510973 containerd[1503]: time="2026-01-17T01:19:45.510533780Z" level=info msg="Ensure that sandbox 38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815 in task-service has been cleanup successfully" Jan 17 01:19:45.512543 kubelet[2689]: I0117 01:19:45.512346 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Jan 17 01:19:45.513315 containerd[1503]: time="2026-01-17T01:19:45.512995288Z" level=info msg="StopPodSandbox for \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\"" Jan 17 01:19:45.513315 containerd[1503]: time="2026-01-17T01:19:45.513207200Z" level=info msg="Ensure that sandbox 4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49 in task-service has been cleanup successfully" Jan 17 01:19:45.519248 kubelet[2689]: I0117 01:19:45.518482 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Jan 17 01:19:45.521220 containerd[1503]: time="2026-01-17T01:19:45.521175403Z" level=info msg="StopPodSandbox for \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\"" Jan 17 01:19:45.523327 containerd[1503]: time="2026-01-17T01:19:45.523011025Z" level=info msg="Ensure that sandbox 4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388 in task-service has been cleanup successfully" Jan 17 01:19:45.524697 kubelet[2689]: I0117 01:19:45.524631 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Jan 17 01:19:45.528250 containerd[1503]: time="2026-01-17T01:19:45.528066366Z" level=info msg="StopPodSandbox for \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\"" Jan 17 01:19:45.529001 containerd[1503]: time="2026-01-17T01:19:45.528875278Z" level=info msg="Ensure that sandbox 447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9 in task-service has been cleanup successfully" Jan 17 01:19:45.534453 kubelet[2689]: I0117 01:19:45.534414 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Jan 17 01:19:45.536408 containerd[1503]: time="2026-01-17T01:19:45.535778486Z" level=info msg="StopPodSandbox for \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\"" Jan 17 01:19:45.536408 containerd[1503]: time="2026-01-17T01:19:45.536014770Z" level=info msg="Ensure that sandbox 230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742 in task-service has been cleanup successfully" Jan 17 01:19:45.616224 containerd[1503]: time="2026-01-17T01:19:45.616165972Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b97d9fcf7-9dv76,Uid:94302249-2c45-4fa9-a8ed-6356d7141062,Namespace:calico-apiserver,Attempt:0,}" Jan 17 01:19:45.632112 containerd[1503]: time="2026-01-17T01:19:45.632058131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8559fb66ff-gwd8z,Uid:3996ebc6-9eb5-4686-84b4-4c62f64f0ca5,Namespace:calico-apiserver,Attempt:0,}" Jan 17 01:19:45.645700 containerd[1503]: time="2026-01-17T01:19:45.645642253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b97d9fcf7-cbr65,Uid:e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0,Namespace:calico-apiserver,Attempt:0,}" Jan 17 01:19:45.674628 containerd[1503]: time="2026-01-17T01:19:45.674569037Z" level=error msg="StopPodSandbox for \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\" failed" error="failed to destroy network for sandbox \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.688402 kubelet[2689]: E0117 01:19:45.688038 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Jan 17 01:19:45.688402 kubelet[2689]: E0117 01:19:45.688114 2689 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815"} Jan 17 01:19:45.688402 kubelet[2689]: E0117 01:19:45.688162 2689 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03cd3dbd-5e1b-4532-9c89-eb080f7c53df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:19:45.688402 kubelet[2689]: E0117 01:19:45.688210 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03cd3dbd-5e1b-4532-9c89-eb080f7c53df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-trqpb" podUID="03cd3dbd-5e1b-4532-9c89-eb080f7c53df" Jan 17 01:19:45.694176 containerd[1503]: time="2026-01-17T01:19:45.694006734Z" level=error msg="StopPodSandbox for \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\" failed" error="failed to destroy network for sandbox \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.694652 containerd[1503]: time="2026-01-17T01:19:45.694408078Z" level=error msg="StopPodSandbox for \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\" failed" error="failed to destroy network for sandbox \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.694741 kubelet[2689]: E0117 01:19:45.694667 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Jan 17 01:19:45.694868 kubelet[2689]: E0117 01:19:45.694758 2689 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742"} Jan 17 01:19:45.694868 kubelet[2689]: E0117 01:19:45.694802 2689 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"612dd22b-6369-4c99-85b1-59da9f6da310\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:19:45.694868 kubelet[2689]: E0117 01:19:45.694840 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"612dd22b-6369-4c99-85b1-59da9f6da310\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fj97p" podUID="612dd22b-6369-4c99-85b1-59da9f6da310" Jan 17 01:19:45.695811 kubelet[2689]: E0117 01:19:45.694890 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Jan 17 01:19:45.695811 kubelet[2689]: E0117 01:19:45.694918 2689 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388"} Jan 17 01:19:45.695811 kubelet[2689]: E0117 01:19:45.694945 2689 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"569a36a0-46e6-4752-8b8f-005d85b2712f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:19:45.695811 kubelet[2689]: E0117 01:19:45.694976 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"569a36a0-46e6-4752-8b8f-005d85b2712f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:19:45.696094 containerd[1503]: time="2026-01-17T01:19:45.695130712Z" level=error msg="StopPodSandbox for \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\" failed" error="failed to destroy network for sandbox \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.696166 kubelet[2689]: E0117 01:19:45.695378 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Jan 17 01:19:45.696166 kubelet[2689]: E0117 01:19:45.695415 2689 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49"} Jan 17 01:19:45.696166 kubelet[2689]: E0117 01:19:45.695447 2689 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"722a4a78-bbcc-4e35-a380-cd81c1aedcd6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:19:45.696166 kubelet[2689]: E0117 01:19:45.695474 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"722a4a78-bbcc-4e35-a380-cd81c1aedcd6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-796c9cbbf8-tx6d5" podUID="722a4a78-bbcc-4e35-a380-cd81c1aedcd6" Jan 17 01:19:45.702503 containerd[1503]: time="2026-01-17T01:19:45.701420237Z" level=error msg="StopPodSandbox for 
\"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\" failed" error="failed to destroy network for sandbox \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.702607 kubelet[2689]: E0117 01:19:45.702496 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Jan 17 01:19:45.702607 kubelet[2689]: E0117 01:19:45.702557 2689 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9"} Jan 17 01:19:45.702607 kubelet[2689]: E0117 01:19:45.702601 2689 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5fbd3817-1243-49f3-ac44-7bd3e32af698\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:19:45.702839 kubelet[2689]: E0117 01:19:45.702641 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5fbd3817-1243-49f3-ac44-7bd3e32af698\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5665f4c7d8-97hgh" podUID="5fbd3817-1243-49f3-ac44-7bd3e32af698" Jan 17 01:19:45.833479 containerd[1503]: time="2026-01-17T01:19:45.833410526Z" level=error msg="Failed to destroy network for sandbox \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.833972 containerd[1503]: time="2026-01-17T01:19:45.833931598Z" level=error msg="encountered an error cleaning up failed sandbox \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.838394 containerd[1503]: time="2026-01-17T01:19:45.838341489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8559fb66ff-gwd8z,Uid:3996ebc6-9eb5-4686-84b4-4c62f64f0ca5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.839121 kubelet[2689]: E0117 01:19:45.839067 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.839242 kubelet[2689]: E0117 01:19:45.839194 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8559fb66ff-gwd8z" Jan 17 01:19:45.839349 kubelet[2689]: E0117 01:19:45.839233 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8559fb66ff-gwd8z" Jan 17 01:19:45.840001 kubelet[2689]: E0117 01:19:45.839360 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8559fb66ff-gwd8z_calico-apiserver(3996ebc6-9eb5-4686-84b4-4c62f64f0ca5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8559fb66ff-gwd8z_calico-apiserver(3996ebc6-9eb5-4686-84b4-4c62f64f0ca5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8559fb66ff-gwd8z" podUID="3996ebc6-9eb5-4686-84b4-4c62f64f0ca5" Jan 17 01:19:45.851533 containerd[1503]: time="2026-01-17T01:19:45.851457213Z" level=error msg="Failed to destroy network for sandbox \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.852167 containerd[1503]: time="2026-01-17T01:19:45.852124291Z" level=error msg="encountered an error cleaning up failed sandbox \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.852303 containerd[1503]: time="2026-01-17T01:19:45.852217356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b97d9fcf7-cbr65,Uid:e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0,Namespace:calico-apiserver,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.852633 kubelet[2689]: E0117 01:19:45.852582 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.852735 kubelet[2689]: E0117 01:19:45.852710 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" Jan 17 01:19:45.852803 kubelet[2689]: E0117 01:19:45.852740 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" Jan 17 01:19:45.853208 kubelet[2689]: E0117 01:19:45.852836 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b97d9fcf7-cbr65_calico-apiserver(e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b97d9fcf7-cbr65_calico-apiserver(e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" podUID="e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0" Jan 17 01:19:45.860560 containerd[1503]: time="2026-01-17T01:19:45.860415298Z" level=error msg="Failed to destroy network for sandbox \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.861144 containerd[1503]: time="2026-01-17T01:19:45.860977232Z" level=error msg="encountered an error cleaning up failed sandbox \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.861144 containerd[1503]: time="2026-01-17T01:19:45.861049898Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b97d9fcf7-9dv76,Uid:94302249-2c45-4fa9-a8ed-6356d7141062,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.861515 kubelet[2689]: E0117 01:19:45.861337 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:45.861515 kubelet[2689]: E0117 01:19:45.861399 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-9dv76" Jan 17 01:19:45.861515 kubelet[2689]: E0117 01:19:45.861425 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-9dv76" Jan 17 01:19:45.861987 kubelet[2689]: E0117 01:19:45.861937 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b97d9fcf7-9dv76_calico-apiserver(94302249-2c45-4fa9-a8ed-6356d7141062)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b97d9fcf7-9dv76_calico-apiserver(94302249-2c45-4fa9-a8ed-6356d7141062)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-9dv76" podUID="94302249-2c45-4fa9-a8ed-6356d7141062" Jan 17 01:19:46.539946 kubelet[2689]: I0117 01:19:46.539879 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Jan 17 01:19:46.542092 containerd[1503]: time="2026-01-17T01:19:46.541887824Z" level=info msg="StopPodSandbox for \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\"" Jan 17 01:19:46.543681 containerd[1503]: time="2026-01-17T01:19:46.543116556Z" level=info msg="Ensure that sandbox 37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef in task-service has been cleanup successfully" Jan 17 01:19:46.545654 kubelet[2689]: I0117 01:19:46.545612 2689 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Jan 17 01:19:46.547304 containerd[1503]: time="2026-01-17T01:19:46.547121292Z" level=info msg="StopPodSandbox for \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\"" Jan 17 01:19:46.547729 containerd[1503]: time="2026-01-17T01:19:46.547513860Z" level=info msg="Ensure that sandbox 51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c in task-service has been cleanup successfully" Jan 17 01:19:46.557498 kubelet[2689]: I0117 01:19:46.556577 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Jan 17 01:19:46.558404 containerd[1503]: time="2026-01-17T01:19:46.558163304Z" level=info msg="StopPodSandbox for \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\"" Jan 17 01:19:46.558518 containerd[1503]: time="2026-01-17T01:19:46.558487560Z" level=info msg="Ensure that sandbox 2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134 in task-service has been cleanup successfully" Jan 17 01:19:46.600602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef-shm.mount: Deactivated successfully. Jan 17 01:19:46.600797 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134-shm.mount: Deactivated successfully. Jan 17 01:19:46.644910 containerd[1503]: time="2026-01-17T01:19:46.644590023Z" level=error msg="StopPodSandbox for \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\" failed" error="failed to destroy network for sandbox \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:46.644910 containerd[1503]: time="2026-01-17T01:19:46.644805942Z" level=error msg="StopPodSandbox for \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\" failed" error="failed to destroy network for sandbox \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:46.645871 kubelet[2689]: E0117 01:19:46.645173 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Jan 17 01:19:46.645871 kubelet[2689]: E0117 01:19:46.645377 2689 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef"} Jan 17 01:19:46.645871 kubelet[2689]: E0117 01:19:46.645465 2689 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:19:46.645871 kubelet[2689]: E0117 01:19:46.645142 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Jan 17 01:19:46.645871 kubelet[2689]: E0117 01:19:46.645590 2689 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c"} Jan 17 01:19:46.646407 kubelet[2689]: E0117 01:19:46.645654 2689 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3996ebc6-9eb5-4686-84b4-4c62f64f0ca5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:19:46.646407 kubelet[2689]: E0117 01:19:46.645701 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3996ebc6-9eb5-4686-84b4-4c62f64f0ca5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8559fb66ff-gwd8z" podUID="3996ebc6-9eb5-4686-84b4-4c62f64f0ca5" Jan 17 01:19:46.646407 kubelet[2689]: E0117 01:19:46.645586 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" podUID="e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0" Jan 17 01:19:46.655677 containerd[1503]: time="2026-01-17T01:19:46.655469254Z" level=error msg="StopPodSandbox for \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\" failed" error="failed to destroy network for sandbox \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:46.656021 kubelet[2689]: E0117 01:19:46.655944 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Jan 17 01:19:46.656154 kubelet[2689]: E0117 01:19:46.656041 2689 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134"} Jan 17 01:19:46.656154 kubelet[2689]: E0117 01:19:46.656097 2689 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94302249-2c45-4fa9-a8ed-6356d7141062\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:19:46.656372 kubelet[2689]: E0117 01:19:46.656156 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94302249-2c45-4fa9-a8ed-6356d7141062\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-9dv76" podUID="94302249-2c45-4fa9-a8ed-6356d7141062" Jan 17 01:19:55.133425 containerd[1503]: time="2026-01-17T01:19:55.132883385Z" level=info msg="StopPodSandbox for \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\"" Jan 17 01:19:55.218752 containerd[1503]: time="2026-01-17T01:19:55.218583093Z" level=error msg="StopPodSandbox for \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\" failed" error="failed to destroy network for sandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 01:19:55.220577 kubelet[2689]: E0117 01:19:55.219846 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:19:55.220577 kubelet[2689]: E0117 01:19:55.219963 2689 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561"} Jan 17 01:19:55.220577 kubelet[2689]: E0117 01:19:55.220010 2689 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26acde11-c245-44e9-abdd-2ebc74cfcad2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 01:19:55.220577 kubelet[2689]: E0117 01:19:55.220049 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26acde11-c245-44e9-abdd-2ebc74cfcad2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-x4jzn" podUID="26acde11-c245-44e9-abdd-2ebc74cfcad2" Jan 17 01:19:55.919066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2311132417.mount: Deactivated successfully. Jan 17 01:19:56.000605 containerd[1503]: time="2026-01-17T01:19:55.999511728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:56.005114 containerd[1503]: time="2026-01-17T01:19:56.005053607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 01:19:56.018104 containerd[1503]: time="2026-01-17T01:19:56.018050957Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:56.020149 containerd[1503]: time="2026-01-17T01:19:56.020115125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:19:56.024838 containerd[1503]: time="2026-01-17T01:19:56.023748565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 11.508044437s" Jan 17 01:19:56.024838 containerd[1503]: time="2026-01-17T01:19:56.023810481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 01:19:56.085021 containerd[1503]: time="2026-01-17T01:19:56.084955471Z" level=info msg="CreateContainer within sandbox \"3c77b526ead17197d701a212fa14c87d6ccf527bd549bff3d931c27353f168a9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 01:19:56.204867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3495714965.mount: Deactivated successfully. 
Jan 17 01:19:56.230464 containerd[1503]: time="2026-01-17T01:19:56.230397055Z" level=info msg="CreateContainer within sandbox \"3c77b526ead17197d701a212fa14c87d6ccf527bd549bff3d931c27353f168a9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e0138372493699c832102689e70bd12d2d10486c18b5f4a4cf531a01ec144c76\"" Jan 17 01:19:56.240892 containerd[1503]: time="2026-01-17T01:19:56.240018711Z" level=info msg="StartContainer for \"e0138372493699c832102689e70bd12d2d10486c18b5f4a4cf531a01ec144c76\"" Jan 17 01:19:56.340505 systemd[1]: Started cri-containerd-e0138372493699c832102689e70bd12d2d10486c18b5f4a4cf531a01ec144c76.scope - libcontainer container e0138372493699c832102689e70bd12d2d10486c18b5f4a4cf531a01ec144c76. Jan 17 01:19:56.389970 containerd[1503]: time="2026-01-17T01:19:56.389698723Z" level=info msg="StartContainer for \"e0138372493699c832102689e70bd12d2d10486c18b5f4a4cf531a01ec144c76\" returns successfully" Jan 17 01:19:56.610893 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 01:19:56.612236 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 17 01:19:56.688376 kubelet[2689]: I0117 01:19:56.685169 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7tl79" podStartSLOduration=1.48495741 podStartE2EDuration="28.670712769s" podCreationTimestamp="2026-01-17 01:19:28 +0000 UTC" firstStartedPulling="2026-01-17 01:19:28.846233133 +0000 UTC m=+26.056733005" lastFinishedPulling="2026-01-17 01:19:56.031988491 +0000 UTC m=+53.242488364" observedRunningTime="2026-01-17 01:19:56.659848557 +0000 UTC m=+53.870348470" watchObservedRunningTime="2026-01-17 01:19:56.670712769 +0000 UTC m=+53.881212662" Jan 17 01:19:56.971920 containerd[1503]: time="2026-01-17T01:19:56.971740960Z" level=info msg="StopPodSandbox for \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\"" Jan 17 01:19:57.397101 containerd[1503]: 2026-01-17 01:19:57.104 [INFO][3980] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Jan 17 01:19:57.397101 containerd[1503]: 2026-01-17 01:19:57.106 [INFO][3980] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" iface="eth0" netns="/var/run/netns/cni-f4fe421e-f624-e6a1-8a3a-65761ffc02c6" Jan 17 01:19:57.397101 containerd[1503]: 2026-01-17 01:19:57.107 [INFO][3980] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" iface="eth0" netns="/var/run/netns/cni-f4fe421e-f624-e6a1-8a3a-65761ffc02c6" Jan 17 01:19:57.397101 containerd[1503]: 2026-01-17 01:19:57.109 [INFO][3980] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" iface="eth0" netns="/var/run/netns/cni-f4fe421e-f624-e6a1-8a3a-65761ffc02c6" Jan 17 01:19:57.397101 containerd[1503]: 2026-01-17 01:19:57.109 [INFO][3980] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Jan 17 01:19:57.397101 containerd[1503]: 2026-01-17 01:19:57.109 [INFO][3980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Jan 17 01:19:57.397101 containerd[1503]: 2026-01-17 01:19:57.358 [INFO][3987] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" HandleID="k8s-pod-network.447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Workload="srv--3374x.gb1.brightbox.com-k8s-whisker--5665f4c7d8--97hgh-eth0" Jan 17 01:19:57.397101 containerd[1503]: 2026-01-17 01:19:57.361 [INFO][3987] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:19:57.397101 containerd[1503]: 2026-01-17 01:19:57.363 [INFO][3987] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:19:57.397101 containerd[1503]: 2026-01-17 01:19:57.386 [WARNING][3987] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" HandleID="k8s-pod-network.447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Workload="srv--3374x.gb1.brightbox.com-k8s-whisker--5665f4c7d8--97hgh-eth0" Jan 17 01:19:57.397101 containerd[1503]: 2026-01-17 01:19:57.386 [INFO][3987] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" HandleID="k8s-pod-network.447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Workload="srv--3374x.gb1.brightbox.com-k8s-whisker--5665f4c7d8--97hgh-eth0" Jan 17 01:19:57.397101 containerd[1503]: 2026-01-17 01:19:57.390 [INFO][3987] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:19:57.397101 containerd[1503]: 2026-01-17 01:19:57.394 [INFO][3980] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Jan 17 01:19:57.403797 containerd[1503]: time="2026-01-17T01:19:57.398045001Z" level=info msg="TearDown network for sandbox \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\" successfully" Jan 17 01:19:57.403797 containerd[1503]: time="2026-01-17T01:19:57.398095400Z" level=info msg="StopPodSandbox for \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\" returns successfully" Jan 17 01:19:57.405171 systemd[1]: run-netns-cni\x2df4fe421e\x2df624\x2de6a1\x2d8a3a\x2d65761ffc02c6.mount: Deactivated successfully. 
Jan 17 01:19:57.484358 kubelet[2689]: I0117 01:19:57.483538 2689 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5fbd3817-1243-49f3-ac44-7bd3e32af698-whisker-backend-key-pair\") pod \"5fbd3817-1243-49f3-ac44-7bd3e32af698\" (UID: \"5fbd3817-1243-49f3-ac44-7bd3e32af698\") " Jan 17 01:19:57.484358 kubelet[2689]: I0117 01:19:57.483632 2689 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fbd3817-1243-49f3-ac44-7bd3e32af698-whisker-ca-bundle\") pod \"5fbd3817-1243-49f3-ac44-7bd3e32af698\" (UID: \"5fbd3817-1243-49f3-ac44-7bd3e32af698\") " Jan 17 01:19:57.484358 kubelet[2689]: I0117 01:19:57.483674 2689 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2ml2\" (UniqueName: \"kubernetes.io/projected/5fbd3817-1243-49f3-ac44-7bd3e32af698-kube-api-access-v2ml2\") pod \"5fbd3817-1243-49f3-ac44-7bd3e32af698\" (UID: \"5fbd3817-1243-49f3-ac44-7bd3e32af698\") " Jan 17 01:19:57.515303 kubelet[2689]: I0117 01:19:57.505889 2689 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fbd3817-1243-49f3-ac44-7bd3e32af698-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5fbd3817-1243-49f3-ac44-7bd3e32af698" (UID: "5fbd3817-1243-49f3-ac44-7bd3e32af698"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 01:19:57.514831 systemd[1]: var-lib-kubelet-pods-5fbd3817\x2d1243\x2d49f3\x2dac44\x2d7bd3e32af698-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 01:19:57.515851 kubelet[2689]: I0117 01:19:57.515815 2689 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fbd3817-1243-49f3-ac44-7bd3e32af698-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5fbd3817-1243-49f3-ac44-7bd3e32af698" (UID: "5fbd3817-1243-49f3-ac44-7bd3e32af698"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 01:19:57.518873 kubelet[2689]: I0117 01:19:57.518836 2689 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fbd3817-1243-49f3-ac44-7bd3e32af698-kube-api-access-v2ml2" (OuterVolumeSpecName: "kube-api-access-v2ml2") pod "5fbd3817-1243-49f3-ac44-7bd3e32af698" (UID: "5fbd3817-1243-49f3-ac44-7bd3e32af698"). InnerVolumeSpecName "kube-api-access-v2ml2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 01:19:57.520368 systemd[1]: var-lib-kubelet-pods-5fbd3817\x2d1243\x2d49f3\x2dac44\x2d7bd3e32af698-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv2ml2.mount: Deactivated successfully. 
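The mount-unit names systemd reports here (var-lib-kubelet-pods-5fbd3817\x2d1243\x2d...) are the pod volume paths run through systemd's unit-name escaping, in which "/" maps to "-" and a literal "-" (like most other punctuation) becomes a \xNN byte escape. A rough approximation of that escaping, useful for reading such unit names; the authoritative tool is systemd-escape(1), and edge cases such as leading dots are elided here:

package main

import "fmt"

// systemdEscapePath approximates systemd's path-to-unit-name escaping:
// "/" becomes "-", alphanumerics plus "_" and "." pass through, and
// everything else (including "-" itself) becomes \xNN.
func systemdEscapePath(p string) string {
    out := make([]byte, 0, len(p))
    for i := 0; i < len(p); i++ {
        c := p[i]
        switch {
        case c == '/':
            out = append(out, '-')
        case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
            c >= '0' && c <= '9', c == '_', c == '.':
            out = append(out, c)
        default:
            out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
        }
    }
    return string(out)
}

func main() {
    // Prints the escaped form seen in the log's mount-unit names.
    fmt.Println(systemdEscapePath("var/lib/kubelet/pods/5fbd3817-1243-49f3-ac44-7bd3e32af698"))
}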
Jan 17 01:19:57.584510 kubelet[2689]: I0117 01:19:57.584434 2689 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fbd3817-1243-49f3-ac44-7bd3e32af698-whisker-ca-bundle\") on node \"srv-3374x.gb1.brightbox.com\" DevicePath \"\"" Jan 17 01:19:57.584510 kubelet[2689]: I0117 01:19:57.584508 2689 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v2ml2\" (UniqueName: \"kubernetes.io/projected/5fbd3817-1243-49f3-ac44-7bd3e32af698-kube-api-access-v2ml2\") on node \"srv-3374x.gb1.brightbox.com\" DevicePath \"\"" Jan 17 01:19:57.584791 kubelet[2689]: I0117 01:19:57.584538 2689 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5fbd3817-1243-49f3-ac44-7bd3e32af698-whisker-backend-key-pair\") on node \"srv-3374x.gb1.brightbox.com\" DevicePath \"\"" Jan 17 01:19:57.635528 kubelet[2689]: I0117 01:19:57.634976 2689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 01:19:57.655607 systemd[1]: Removed slice kubepods-besteffort-pod5fbd3817_1243_49f3_ac44_7bd3e32af698.slice - libcontainer container kubepods-besteffort-pod5fbd3817_1243_49f3_ac44_7bd3e32af698.slice. Jan 17 01:19:57.813453 systemd[1]: Created slice kubepods-besteffort-pod87f78f88_02a9_4300_bfd6_d78b47321ed8.slice - libcontainer container kubepods-besteffort-pod87f78f88_02a9_4300_bfd6_d78b47321ed8.slice. Jan 17 01:19:57.887317 kubelet[2689]: I0117 01:19:57.886673 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87f78f88-02a9-4300-bfd6-d78b47321ed8-whisker-backend-key-pair\") pod \"whisker-56bdf7c7c8-4pst2\" (UID: \"87f78f88-02a9-4300-bfd6-d78b47321ed8\") " pod="calico-system/whisker-56bdf7c7c8-4pst2" Jan 17 01:19:57.888430 kubelet[2689]: I0117 01:19:57.887424 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87f78f88-02a9-4300-bfd6-d78b47321ed8-whisker-ca-bundle\") pod \"whisker-56bdf7c7c8-4pst2\" (UID: \"87f78f88-02a9-4300-bfd6-d78b47321ed8\") " pod="calico-system/whisker-56bdf7c7c8-4pst2" Jan 17 01:19:57.888430 kubelet[2689]: I0117 01:19:57.887476 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99dsh\" (UniqueName: \"kubernetes.io/projected/87f78f88-02a9-4300-bfd6-d78b47321ed8-kube-api-access-99dsh\") pod \"whisker-56bdf7c7c8-4pst2\" (UID: \"87f78f88-02a9-4300-bfd6-d78b47321ed8\") " pod="calico-system/whisker-56bdf7c7c8-4pst2" Jan 17 01:19:58.125432 containerd[1503]: time="2026-01-17T01:19:58.124658012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56bdf7c7c8-4pst2,Uid:87f78f88-02a9-4300-bfd6-d78b47321ed8,Namespace:calico-system,Attempt:0,}" Jan 17 01:19:58.129585 containerd[1503]: time="2026-01-17T01:19:58.129534409Z" level=info msg="StopPodSandbox for \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\"" Jan 17 01:19:58.586013 systemd-networkd[1426]: caliad31714aebc: Link UP Jan 17 01:19:58.600918 systemd-networkd[1426]: caliad31714aebc: Gained carrier Jan 17 01:19:58.624110 containerd[1503]: 2026-01-17 01:19:58.357 [INFO][4060] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Jan 17 01:19:58.624110 containerd[1503]: 2026-01-17 01:19:58.359 [INFO][4060] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" iface="eth0" netns="/var/run/netns/cni-4270b827-cae1-831a-ca76-72a88ce6fbe5" Jan 17 01:19:58.624110 containerd[1503]: 2026-01-17 01:19:58.359 [INFO][4060] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" iface="eth0" netns="/var/run/netns/cni-4270b827-cae1-831a-ca76-72a88ce6fbe5" Jan 17 01:19:58.624110 containerd[1503]: 2026-01-17 01:19:58.361 [INFO][4060] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" iface="eth0" netns="/var/run/netns/cni-4270b827-cae1-831a-ca76-72a88ce6fbe5" Jan 17 01:19:58.624110 containerd[1503]: 2026-01-17 01:19:58.361 [INFO][4060] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Jan 17 01:19:58.624110 containerd[1503]: 2026-01-17 01:19:58.363 [INFO][4060] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Jan 17 01:19:58.624110 containerd[1503]: 2026-01-17 01:19:58.481 [INFO][4084] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" HandleID="k8s-pod-network.4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:19:58.624110 containerd[1503]: 2026-01-17 01:19:58.481 [INFO][4084] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:19:58.624110 containerd[1503]: 2026-01-17 01:19:58.538 [INFO][4084] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:19:58.624110 containerd[1503]: 2026-01-17 01:19:58.565 [WARNING][4084] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" HandleID="k8s-pod-network.4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:19:58.624110 containerd[1503]: 2026-01-17 01:19:58.565 [INFO][4084] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" HandleID="k8s-pod-network.4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:19:58.624110 containerd[1503]: 2026-01-17 01:19:58.576 [INFO][4084] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:19:58.624110 containerd[1503]: 2026-01-17 01:19:58.619 [INFO][4060] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Jan 17 01:19:58.634191 containerd[1503]: time="2026-01-17T01:19:58.624070410Z" level=info msg="TearDown network for sandbox \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\" successfully" Jan 17 01:19:58.634191 containerd[1503]: time="2026-01-17T01:19:58.624390259Z" level=info msg="StopPodSandbox for \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\" returns successfully" Jan 17 01:19:58.637729 containerd[1503]: time="2026-01-17T01:19:58.637483103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796c9cbbf8-tx6d5,Uid:722a4a78-bbcc-4e35-a380-cd81c1aedcd6,Namespace:calico-system,Attempt:1,}" Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.285 [INFO][4064] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.310 [INFO][4064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3374x.gb1.brightbox.com-k8s-whisker--56bdf7c7c8--4pst2-eth0 whisker-56bdf7c7c8- calico-system 87f78f88-02a9-4300-bfd6-d78b47321ed8 987 0 2026-01-17 01:19:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:56bdf7c7c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-3374x.gb1.brightbox.com whisker-56bdf7c7c8-4pst2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliad31714aebc [] [] }} ContainerID="360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" Namespace="calico-system" Pod="whisker-56bdf7c7c8-4pst2" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-whisker--56bdf7c7c8--4pst2-" Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.311 [INFO][4064] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" Namespace="calico-system" Pod="whisker-56bdf7c7c8-4pst2" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-whisker--56bdf7c7c8--4pst2-eth0" Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.445 [INFO][4079] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" HandleID="k8s-pod-network.360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" Workload="srv--3374x.gb1.brightbox.com-k8s-whisker--56bdf7c7c8--4pst2-eth0" Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.446 [INFO][4079] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" HandleID="k8s-pod-network.360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" Workload="srv--3374x.gb1.brightbox.com-k8s-whisker--56bdf7c7c8--4pst2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332f40), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-3374x.gb1.brightbox.com", "pod":"whisker-56bdf7c7c8-4pst2", "timestamp":"2026-01-17 01:19:58.445437531 +0000 UTC"}, Hostname:"srv-3374x.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.446 [INFO][4079] ipam/ipam_plugin.go 377: About 
to acquire host-wide IPAM lock. Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.446 [INFO][4079] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.447 [INFO][4079] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3374x.gb1.brightbox.com' Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.472 [INFO][4079] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.491 [INFO][4079] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.499 [INFO][4079] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.505 [INFO][4079] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.511 [INFO][4079] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.511 [INFO][4079] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.515 [INFO][4079] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.523 [INFO][4079] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.538 [INFO][4079] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.129/26] block=192.168.2.128/26 handle="k8s-pod-network.360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.538 [INFO][4079] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.129/26] handle="k8s-pod-network.360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.538 [INFO][4079] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 01:19:58.671722 containerd[1503]: 2026-01-17 01:19:58.538 [INFO][4079] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.129/26] IPv6=[] ContainerID="360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" HandleID="k8s-pod-network.360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" Workload="srv--3374x.gb1.brightbox.com-k8s-whisker--56bdf7c7c8--4pst2-eth0" Jan 17 01:19:58.673869 containerd[1503]: 2026-01-17 01:19:58.546 [INFO][4064] cni-plugin/k8s.go 418: Populated endpoint ContainerID="360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" Namespace="calico-system" Pod="whisker-56bdf7c7c8-4pst2" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-whisker--56bdf7c7c8--4pst2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-whisker--56bdf7c7c8--4pst2-eth0", GenerateName:"whisker-56bdf7c7c8-", Namespace:"calico-system", SelfLink:"", UID:"87f78f88-02a9-4300-bfd6-d78b47321ed8", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56bdf7c7c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"", Pod:"whisker-56bdf7c7c8-4pst2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliad31714aebc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:19:58.673869 containerd[1503]: 2026-01-17 01:19:58.546 [INFO][4064] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.129/32] ContainerID="360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" Namespace="calico-system" Pod="whisker-56bdf7c7c8-4pst2" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-whisker--56bdf7c7c8--4pst2-eth0" Jan 17 01:19:58.673869 containerd[1503]: 2026-01-17 01:19:58.546 [INFO][4064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad31714aebc ContainerID="360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" Namespace="calico-system" Pod="whisker-56bdf7c7c8-4pst2" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-whisker--56bdf7c7c8--4pst2-eth0" Jan 17 01:19:58.673869 containerd[1503]: 2026-01-17 01:19:58.617 [INFO][4064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" Namespace="calico-system" Pod="whisker-56bdf7c7c8-4pst2" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-whisker--56bdf7c7c8--4pst2-eth0" Jan 17 01:19:58.673869 containerd[1503]: 2026-01-17 01:19:58.618 [INFO][4064] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" Namespace="calico-system" 
Pod="whisker-56bdf7c7c8-4pst2" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-whisker--56bdf7c7c8--4pst2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-whisker--56bdf7c7c8--4pst2-eth0", GenerateName:"whisker-56bdf7c7c8-", Namespace:"calico-system", SelfLink:"", UID:"87f78f88-02a9-4300-bfd6-d78b47321ed8", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56bdf7c7c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b", Pod:"whisker-56bdf7c7c8-4pst2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliad31714aebc", MAC:"d2:57:de:ed:94:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:19:58.673869 containerd[1503]: 2026-01-17 01:19:58.663 [INFO][4064] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b" Namespace="calico-system" Pod="whisker-56bdf7c7c8-4pst2" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-whisker--56bdf7c7c8--4pst2-eth0" Jan 17 01:19:58.768651 containerd[1503]: time="2026-01-17T01:19:58.768484492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:19:58.772678 containerd[1503]: time="2026-01-17T01:19:58.771858102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:19:58.772678 containerd[1503]: time="2026-01-17T01:19:58.771890186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:19:58.772678 containerd[1503]: time="2026-01-17T01:19:58.772042994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:19:58.837414 systemd[1]: Started cri-containerd-360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b.scope - libcontainer container 360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b. Jan 17 01:19:58.927549 systemd[1]: run-netns-cni\x2d4270b827\x2dcae1\x2d831a\x2dca76\x2d72a88ce6fbe5.mount: Deactivated successfully. 
Jan 17 01:19:59.068037 systemd-networkd[1426]: cali0c5137f3eba: Link UP Jan 17 01:19:59.069207 systemd-networkd[1426]: cali0c5137f3eba: Gained carrier Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:58.806 [INFO][4145] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:58.863 [INFO][4145] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0 calico-kube-controllers-796c9cbbf8- calico-system 722a4a78-bbcc-4e35-a380-cd81c1aedcd6 992 0 2026-01-17 01:19:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:796c9cbbf8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-3374x.gb1.brightbox.com calico-kube-controllers-796c9cbbf8-tx6d5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0c5137f3eba [] [] }} ContainerID="2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" Namespace="calico-system" Pod="calico-kube-controllers-796c9cbbf8-tx6d5" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-" Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:58.863 [INFO][4145] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" Namespace="calico-system" Pod="calico-kube-controllers-796c9cbbf8-tx6d5" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:58.985 [INFO][4223] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" HandleID="k8s-pod-network.2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:58.985 [INFO][4223] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" HandleID="k8s-pod-network.2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000329d60), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-3374x.gb1.brightbox.com", "pod":"calico-kube-controllers-796c9cbbf8-tx6d5", "timestamp":"2026-01-17 01:19:58.985561998 +0000 UTC"}, Hostname:"srv-3374x.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:58.985 [INFO][4223] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:58.985 [INFO][4223] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:58.986 [INFO][4223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3374x.gb1.brightbox.com' Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:59.009 [INFO][4223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:59.019 [INFO][4223] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:59.026 [INFO][4223] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:59.029 [INFO][4223] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:59.034 [INFO][4223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:59.035 [INFO][4223] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:59.038 [INFO][4223] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16 Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:59.047 [INFO][4223] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:59.056 [INFO][4223] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.130/26] block=192.168.2.128/26 handle="k8s-pod-network.2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:59.056 [INFO][4223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.130/26] handle="k8s-pod-network.2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" host="srv-3374x.gb1.brightbox.com" Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:59.057 [INFO][4223] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 01:19:59.101035 containerd[1503]: 2026-01-17 01:19:59.057 [INFO][4223] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.130/26] IPv6=[] ContainerID="2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" HandleID="k8s-pod-network.2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:19:59.102152 containerd[1503]: 2026-01-17 01:19:59.062 [INFO][4145] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" Namespace="calico-system" Pod="calico-kube-controllers-796c9cbbf8-tx6d5" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0", GenerateName:"calico-kube-controllers-796c9cbbf8-", Namespace:"calico-system", SelfLink:"", UID:"722a4a78-bbcc-4e35-a380-cd81c1aedcd6", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796c9cbbf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-796c9cbbf8-tx6d5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c5137f3eba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:19:59.102152 containerd[1503]: 2026-01-17 01:19:59.064 [INFO][4145] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.130/32] ContainerID="2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" Namespace="calico-system" Pod="calico-kube-controllers-796c9cbbf8-tx6d5" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:19:59.102152 containerd[1503]: 2026-01-17 01:19:59.064 [INFO][4145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c5137f3eba ContainerID="2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" Namespace="calico-system" Pod="calico-kube-controllers-796c9cbbf8-tx6d5" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:19:59.102152 containerd[1503]: 2026-01-17 01:19:59.070 [INFO][4145] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" Namespace="calico-system" Pod="calico-kube-controllers-796c9cbbf8-tx6d5" 
WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:19:59.102152 containerd[1503]: 2026-01-17 01:19:59.072 [INFO][4145] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" Namespace="calico-system" Pod="calico-kube-controllers-796c9cbbf8-tx6d5" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0", GenerateName:"calico-kube-controllers-796c9cbbf8-", Namespace:"calico-system", SelfLink:"", UID:"722a4a78-bbcc-4e35-a380-cd81c1aedcd6", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796c9cbbf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16", Pod:"calico-kube-controllers-796c9cbbf8-tx6d5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c5137f3eba", MAC:"0e:7d:04:28:7d:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:19:59.102152 containerd[1503]: 2026-01-17 01:19:59.088 [INFO][4145] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16" Namespace="calico-system" Pod="calico-kube-controllers-796c9cbbf8-tx6d5" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:19:59.128590 containerd[1503]: time="2026-01-17T01:19:59.128009333Z" level=info msg="StopPodSandbox for \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\"" Jan 17 01:19:59.131792 containerd[1503]: time="2026-01-17T01:19:59.131315102Z" level=info msg="StopPodSandbox for \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\"" Jan 17 01:19:59.143034 kubelet[2689]: I0117 01:19:59.142488 2689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fbd3817-1243-49f3-ac44-7bd3e32af698" path="/var/lib/kubelet/pods/5fbd3817-1243-49f3-ac44-7bd3e32af698/volumes" Jan 17 01:19:59.172183 containerd[1503]: time="2026-01-17T01:19:59.171042107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:19:59.172183 containerd[1503]: time="2026-01-17T01:19:59.171150487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:19:59.172183 containerd[1503]: time="2026-01-17T01:19:59.171173816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:19:59.174169 containerd[1503]: time="2026-01-17T01:19:59.173769947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:19:59.314854 systemd[1]: run-containerd-runc-k8s.io-2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16-runc.mgHkij.mount: Deactivated successfully. Jan 17 01:19:59.332564 systemd[1]: Started cri-containerd-2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16.scope - libcontainer container 2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16. Jan 17 01:19:59.353858 containerd[1503]: time="2026-01-17T01:19:59.352084079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56bdf7c7c8-4pst2,Uid:87f78f88-02a9-4300-bfd6-d78b47321ed8,Namespace:calico-system,Attempt:0,} returns sandbox id \"360553f4159486d79b4e635849e27cf242672ed17203e7ded2320125baa2126b\"" Jan 17 01:19:59.367893 containerd[1503]: time="2026-01-17T01:19:59.367016807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 01:19:59.584229 containerd[1503]: 2026-01-17 01:19:59.480 [INFO][4283] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Jan 17 01:19:59.584229 containerd[1503]: 2026-01-17 01:19:59.481 [INFO][4283] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" iface="eth0" netns="/var/run/netns/cni-aaaa4416-4afa-6d2c-c377-892b4c7d4b1b" Jan 17 01:19:59.584229 containerd[1503]: 2026-01-17 01:19:59.481 [INFO][4283] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" iface="eth0" netns="/var/run/netns/cni-aaaa4416-4afa-6d2c-c377-892b4c7d4b1b" Jan 17 01:19:59.584229 containerd[1503]: 2026-01-17 01:19:59.482 [INFO][4283] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" iface="eth0" netns="/var/run/netns/cni-aaaa4416-4afa-6d2c-c377-892b4c7d4b1b" Jan 17 01:19:59.584229 containerd[1503]: 2026-01-17 01:19:59.482 [INFO][4283] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Jan 17 01:19:59.584229 containerd[1503]: 2026-01-17 01:19:59.482 [INFO][4283] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Jan 17 01:19:59.584229 containerd[1503]: 2026-01-17 01:19:59.557 [INFO][4339] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" HandleID="k8s-pod-network.2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:19:59.584229 containerd[1503]: 2026-01-17 01:19:59.557 [INFO][4339] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:19:59.584229 containerd[1503]: 2026-01-17 01:19:59.557 [INFO][4339] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:19:59.584229 containerd[1503]: 2026-01-17 01:19:59.572 [WARNING][4339] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" HandleID="k8s-pod-network.2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:19:59.584229 containerd[1503]: 2026-01-17 01:19:59.572 [INFO][4339] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" HandleID="k8s-pod-network.2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:19:59.584229 containerd[1503]: 2026-01-17 01:19:59.574 [INFO][4339] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:19:59.584229 containerd[1503]: 2026-01-17 01:19:59.581 [INFO][4283] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Jan 17 01:19:59.586513 containerd[1503]: time="2026-01-17T01:19:59.586384775Z" level=info msg="TearDown network for sandbox \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\" successfully" Jan 17 01:19:59.586705 containerd[1503]: time="2026-01-17T01:19:59.586597522Z" level=info msg="StopPodSandbox for \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\" returns successfully" Jan 17 01:19:59.602938 containerd[1503]: time="2026-01-17T01:19:59.602865036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b97d9fcf7-9dv76,Uid:94302249-2c45-4fa9-a8ed-6356d7141062,Namespace:calico-apiserver,Attempt:1,}" Jan 17 01:19:59.624222 containerd[1503]: time="2026-01-17T01:19:59.623976096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796c9cbbf8-tx6d5,Uid:722a4a78-bbcc-4e35-a380-cd81c1aedcd6,Namespace:calico-system,Attempt:1,} returns sandbox id \"2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16\"" Jan 17 01:19:59.672111 containerd[1503]: 2026-01-17 01:19:59.409 [INFO][4270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Jan 17 01:19:59.672111 containerd[1503]: 2026-01-17 01:19:59.411 [INFO][4270] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" iface="eth0" netns="/var/run/netns/cni-64db8a33-0914-ac36-1715-e1b5ca800421" Jan 17 01:19:59.672111 containerd[1503]: 2026-01-17 01:19:59.412 [INFO][4270] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" iface="eth0" netns="/var/run/netns/cni-64db8a33-0914-ac36-1715-e1b5ca800421" Jan 17 01:19:59.672111 containerd[1503]: 2026-01-17 01:19:59.413 [INFO][4270] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" iface="eth0" netns="/var/run/netns/cni-64db8a33-0914-ac36-1715-e1b5ca800421" Jan 17 01:19:59.672111 containerd[1503]: 2026-01-17 01:19:59.414 [INFO][4270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Jan 17 01:19:59.672111 containerd[1503]: 2026-01-17 01:19:59.414 [INFO][4270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Jan 17 01:19:59.672111 containerd[1503]: 2026-01-17 01:19:59.600 [INFO][4323] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" HandleID="k8s-pod-network.51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:19:59.672111 containerd[1503]: 2026-01-17 01:19:59.601 [INFO][4323] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:19:59.672111 containerd[1503]: 2026-01-17 01:19:59.601 [INFO][4323] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:19:59.672111 containerd[1503]: 2026-01-17 01:19:59.636 [WARNING][4323] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" HandleID="k8s-pod-network.51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:19:59.672111 containerd[1503]: 2026-01-17 01:19:59.637 [INFO][4323] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" HandleID="k8s-pod-network.51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:19:59.672111 containerd[1503]: 2026-01-17 01:19:59.642 [INFO][4323] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:19:59.672111 containerd[1503]: 2026-01-17 01:19:59.652 [INFO][4270] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Jan 17 01:19:59.673878 containerd[1503]: time="2026-01-17T01:19:59.672402193Z" level=info msg="TearDown network for sandbox \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\" successfully" Jan 17 01:19:59.673878 containerd[1503]: time="2026-01-17T01:19:59.672436442Z" level=info msg="StopPodSandbox for \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\" returns successfully" Jan 17 01:19:59.675196 containerd[1503]: time="2026-01-17T01:19:59.674608938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8559fb66ff-gwd8z,Uid:3996ebc6-9eb5-4686-84b4-4c62f64f0ca5,Namespace:calico-apiserver,Attempt:1,}" Jan 17 01:19:59.798086 containerd[1503]: time="2026-01-17T01:19:59.797941851Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:19:59.827223 containerd[1503]: time="2026-01-17T01:19:59.800874850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 01:19:59.827223 containerd[1503]: time="2026-01-17T01:19:59.801742153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 01:19:59.827487 kubelet[2689]: E0117 01:19:59.826817 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 01:19:59.827487 kubelet[2689]: E0117 01:19:59.826932 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 01:19:59.831590 containerd[1503]: time="2026-01-17T01:19:59.827802344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 01:19:59.835501 kubelet[2689]: E0117 01:19:59.835441 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-56bdf7c7c8-4pst2_calico-system(87f78f88-02a9-4300-bfd6-d78b47321ed8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 01:19:59.938718 systemd[1]: run-netns-cni\x2d64db8a33\x2d0914\x2dac36\x2d1715\x2de1b5ca800421.mount: Deactivated successfully. Jan 17 01:19:59.938902 systemd[1]: run-netns-cni\x2daaaa4416\x2d4afa\x2d6d2c\x2dc377\x2d892b4c7d4b1b.mount: Deactivated successfully. 
Jan 17 01:19:59.980322 systemd-networkd[1426]: cali7df1a56cbe6: Link UP Jan 17 01:19:59.981849 systemd-networkd[1426]: cali7df1a56cbe6: Gained carrier Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.731 [INFO][4355] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.748 [INFO][4355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0 calico-apiserver-6b97d9fcf7- calico-apiserver 94302249-2c45-4fa9-a8ed-6356d7141062 1006 0 2026-01-17 01:19:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b97d9fcf7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-3374x.gb1.brightbox.com calico-apiserver-6b97d9fcf7-9dv76 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7df1a56cbe6 [] [] }} ContainerID="c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-9dv76" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-" Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.749 [INFO][4355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-9dv76" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.849 [INFO][4384] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" HandleID="k8s-pod-network.c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.852 [INFO][4384] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" HandleID="k8s-pod-network.c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-3374x.gb1.brightbox.com", "pod":"calico-apiserver-6b97d9fcf7-9dv76", "timestamp":"2026-01-17 01:19:59.84992181 +0000 UTC"}, Hostname:"srv-3374x.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.852 [INFO][4384] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.852 [INFO][4384] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.852 [INFO][4384] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3374x.gb1.brightbox.com' Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.873 [INFO][4384] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.895 [INFO][4384] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.910 [INFO][4384] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.915 [INFO][4384] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.926 [INFO][4384] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.926 [INFO][4384] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.937 [INFO][4384] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.956 [INFO][4384] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.971 [INFO][4384] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.131/26] block=192.168.2.128/26 handle="k8s-pod-network.c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.971 [INFO][4384] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.131/26] handle="k8s-pod-network.c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.971 [INFO][4384] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 01:20:00.025590 containerd[1503]: 2026-01-17 01:19:59.972 [INFO][4384] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.131/26] IPv6=[] ContainerID="c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" HandleID="k8s-pod-network.c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:20:00.027416 containerd[1503]: 2026-01-17 01:19:59.975 [INFO][4355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-9dv76" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0", GenerateName:"calico-apiserver-6b97d9fcf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"94302249-2c45-4fa9-a8ed-6356d7141062", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b97d9fcf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6b97d9fcf7-9dv76", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7df1a56cbe6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:00.027416 containerd[1503]: 2026-01-17 01:19:59.975 [INFO][4355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.131/32] ContainerID="c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-9dv76" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:20:00.027416 containerd[1503]: 2026-01-17 01:19:59.975 [INFO][4355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7df1a56cbe6 ContainerID="c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-9dv76" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:20:00.027416 containerd[1503]: 2026-01-17 01:19:59.983 [INFO][4355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-9dv76" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:20:00.027416 containerd[1503]: 2026-01-17 01:19:59.983 
[INFO][4355] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-9dv76" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0", GenerateName:"calico-apiserver-6b97d9fcf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"94302249-2c45-4fa9-a8ed-6356d7141062", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b97d9fcf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b", Pod:"calico-apiserver-6b97d9fcf7-9dv76", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7df1a56cbe6", MAC:"22:d8:bf:65:52:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:00.027416 containerd[1503]: 2026-01-17 01:20:00.017 [INFO][4355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-9dv76" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:20:00.152596 systemd-networkd[1426]: cali0c5137f3eba: Gained IPv6LL Jan 17 01:20:00.155631 containerd[1503]: time="2026-01-17T01:20:00.149499966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:20:00.155631 containerd[1503]: time="2026-01-17T01:20:00.149632313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:20:00.155631 containerd[1503]: time="2026-01-17T01:20:00.149656814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:20:00.155631 containerd[1503]: time="2026-01-17T01:20:00.149990452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:20:00.193931 containerd[1503]: time="2026-01-17T01:20:00.193762527Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:00.202045 containerd[1503]: time="2026-01-17T01:20:00.201862108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 01:20:00.202045 containerd[1503]: time="2026-01-17T01:20:00.201917093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 01:20:00.206179 kubelet[2689]: E0117 01:20:00.206077 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 01:20:00.206179 kubelet[2689]: E0117 01:20:00.206151 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 01:20:00.208729 kubelet[2689]: E0117 01:20:00.206408 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-796c9cbbf8-tx6d5_calico-system(722a4a78-bbcc-4e35-a380-cd81c1aedcd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:00.208729 kubelet[2689]: E0117 01:20:00.206478 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-796c9cbbf8-tx6d5" podUID="722a4a78-bbcc-4e35-a380-cd81c1aedcd6" Jan 17 01:20:00.210917 containerd[1503]: time="2026-01-17T01:20:00.210588333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 01:20:00.222682 systemd[1]: run-containerd-runc-k8s.io-c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b-runc.Togc2y.mount: Deactivated successfully. Jan 17 01:20:00.263551 systemd[1]: Started cri-containerd-c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b.scope - libcontainer container c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b. 
Jan 17 01:20:00.267633 systemd-networkd[1426]: cali7a0a6f3c10f: Link UP Jan 17 01:20:00.273684 systemd-networkd[1426]: cali7a0a6f3c10f: Gained carrier Jan 17 01:20:00.274602 systemd-networkd[1426]: caliad31714aebc: Gained IPv6LL Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:19:59.806 [INFO][4371] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:19:59.876 [INFO][4371] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0 calico-apiserver-8559fb66ff- calico-apiserver 3996ebc6-9eb5-4686-84b4-4c62f64f0ca5 1005 0 2026-01-17 01:19:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8559fb66ff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-3374x.gb1.brightbox.com calico-apiserver-8559fb66ff-gwd8z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7a0a6f3c10f [] [] }} ContainerID="5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" Namespace="calico-apiserver" Pod="calico-apiserver-8559fb66ff-gwd8z" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-" Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:19:59.876 [INFO][4371] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" Namespace="calico-apiserver" Pod="calico-apiserver-8559fb66ff-gwd8z" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.024 [INFO][4394] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" HandleID="k8s-pod-network.5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.024 [INFO][4394] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" HandleID="k8s-pod-network.5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039e120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-3374x.gb1.brightbox.com", "pod":"calico-apiserver-8559fb66ff-gwd8z", "timestamp":"2026-01-17 01:20:00.024289515 +0000 UTC"}, Hostname:"srv-3374x.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.024 [INFO][4394] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.024 [INFO][4394] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.024 [INFO][4394] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3374x.gb1.brightbox.com' Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.076 [INFO][4394] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.093 [INFO][4394] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.118 [INFO][4394] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.133 [INFO][4394] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.140 [INFO][4394] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.141 [INFO][4394] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.155 [INFO][4394] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.177 [INFO][4394] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.226 [INFO][4394] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.132/26] block=192.168.2.128/26 handle="k8s-pod-network.5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.226 [INFO][4394] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.132/26] handle="k8s-pod-network.5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.226 [INFO][4394] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 01:20:00.309613 containerd[1503]: 2026-01-17 01:20:00.226 [INFO][4394] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.132/26] IPv6=[] ContainerID="5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" HandleID="k8s-pod-network.5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:20:00.312827 containerd[1503]: 2026-01-17 01:20:00.254 [INFO][4371] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" Namespace="calico-apiserver" Pod="calico-apiserver-8559fb66ff-gwd8z" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0", GenerateName:"calico-apiserver-8559fb66ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"3996ebc6-9eb5-4686-84b4-4c62f64f0ca5", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8559fb66ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-8559fb66ff-gwd8z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a0a6f3c10f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:00.312827 containerd[1503]: 2026-01-17 01:20:00.254 [INFO][4371] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.132/32] ContainerID="5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" Namespace="calico-apiserver" Pod="calico-apiserver-8559fb66ff-gwd8z" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:20:00.312827 containerd[1503]: 2026-01-17 01:20:00.254 [INFO][4371] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a0a6f3c10f ContainerID="5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" Namespace="calico-apiserver" Pod="calico-apiserver-8559fb66ff-gwd8z" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:20:00.312827 containerd[1503]: 2026-01-17 01:20:00.281 [INFO][4371] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" Namespace="calico-apiserver" Pod="calico-apiserver-8559fb66ff-gwd8z" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:20:00.312827 containerd[1503]: 2026-01-17 01:20:00.282 
[INFO][4371] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" Namespace="calico-apiserver" Pod="calico-apiserver-8559fb66ff-gwd8z" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0", GenerateName:"calico-apiserver-8559fb66ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"3996ebc6-9eb5-4686-84b4-4c62f64f0ca5", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8559fb66ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e", Pod:"calico-apiserver-8559fb66ff-gwd8z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a0a6f3c10f", MAC:"46:2d:fa:5f:f0:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:00.312827 containerd[1503]: 2026-01-17 01:20:00.304 [INFO][4371] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e" Namespace="calico-apiserver" Pod="calico-apiserver-8559fb66ff-gwd8z" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:20:00.386344 containerd[1503]: time="2026-01-17T01:20:00.386165895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:20:00.387380 containerd[1503]: time="2026-01-17T01:20:00.386368272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:20:00.387530 containerd[1503]: time="2026-01-17T01:20:00.387375777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:20:00.388531 containerd[1503]: time="2026-01-17T01:20:00.387519315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:20:00.450544 systemd[1]: Started cri-containerd-5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e.scope - libcontainer container 5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e. 
Jan 17 01:20:00.527570 containerd[1503]: time="2026-01-17T01:20:00.527452219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b97d9fcf7-9dv76,Uid:94302249-2c45-4fa9-a8ed-6356d7141062,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b\"" Jan 17 01:20:00.603940 containerd[1503]: time="2026-01-17T01:20:00.603784516Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:00.605799 containerd[1503]: time="2026-01-17T01:20:00.605375844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 01:20:00.605799 containerd[1503]: time="2026-01-17T01:20:00.605489130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 01:20:00.608011 kubelet[2689]: E0117 01:20:00.606570 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 01:20:00.608011 kubelet[2689]: E0117 01:20:00.606641 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 01:20:00.608011 kubelet[2689]: E0117 01:20:00.606873 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-56bdf7c7c8-4pst2_calico-system(87f78f88-02a9-4300-bfd6-d78b47321ed8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:00.609496 kubelet[2689]: E0117 01:20:00.606952 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56bdf7c7c8-4pst2" podUID="87f78f88-02a9-4300-bfd6-d78b47321ed8" Jan 17 01:20:00.612298 containerd[1503]: time="2026-01-17T01:20:00.611953811Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8559fb66ff-gwd8z,Uid:3996ebc6-9eb5-4686-84b4-4c62f64f0ca5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e\"" Jan 17 01:20:00.612919 containerd[1503]: time="2026-01-17T01:20:00.612479960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:20:00.674314 kernel: bpftool[4532]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 01:20:00.717654 kubelet[2689]: E0117 01:20:00.716987 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-796c9cbbf8-tx6d5" podUID="722a4a78-bbcc-4e35-a380-cd81c1aedcd6" Jan 17 01:20:00.720542 kubelet[2689]: E0117 01:20:00.719183 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56bdf7c7c8-4pst2" podUID="87f78f88-02a9-4300-bfd6-d78b47321ed8" Jan 17 01:20:00.960717 containerd[1503]: time="2026-01-17T01:20:00.960160330Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:00.964524 containerd[1503]: time="2026-01-17T01:20:00.963175204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:20:00.964524 containerd[1503]: time="2026-01-17T01:20:00.963225360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:20:00.964711 kubelet[2689]: E0117 01:20:00.963670 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:00.964711 kubelet[2689]: E0117 01:20:00.963753 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:00.965194 containerd[1503]: time="2026-01-17T01:20:00.965157819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:20:00.965850 kubelet[2689]: E0117 01:20:00.965781 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b97d9fcf7-9dv76_calico-apiserver(94302249-2c45-4fa9-a8ed-6356d7141062): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:00.966152 kubelet[2689]: E0117 01:20:00.966078 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-9dv76" podUID="94302249-2c45-4fa9-a8ed-6356d7141062" Jan 17 01:20:01.083906 systemd-networkd[1426]: vxlan.calico: Link UP Jan 17 01:20:01.083919 systemd-networkd[1426]: vxlan.calico: Gained carrier Jan 17 01:20:01.136017 containerd[1503]: time="2026-01-17T01:20:01.132466190Z" level=info msg="StopPodSandbox for \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\"" Jan 17 01:20:01.136017 containerd[1503]: time="2026-01-17T01:20:01.134031537Z" level=info msg="StopPodSandbox for \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\"" Jan 17 01:20:01.136017 containerd[1503]: time="2026-01-17T01:20:01.135996257Z" level=info msg="StopPodSandbox for \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\"" Jan 17 01:20:01.141556 containerd[1503]: time="2026-01-17T01:20:01.140406815Z" level=info msg="StopPodSandbox for \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\"" Jan 17 01:20:01.299508 containerd[1503]: time="2026-01-17T01:20:01.299446838Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:01.302393 containerd[1503]: time="2026-01-17T01:20:01.302347245Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:20:01.302654 containerd[1503]: time="2026-01-17T01:20:01.302588271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:20:01.303033 kubelet[2689]: E0117 01:20:01.302968 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:01.305460 kubelet[2689]: E0117 
01:20:01.303051 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:01.305460 kubelet[2689]: E0117 01:20:01.303165 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8559fb66ff-gwd8z_calico-apiserver(3996ebc6-9eb5-4686-84b4-4c62f64f0ca5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:01.305460 kubelet[2689]: E0117 01:20:01.303216 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559fb66ff-gwd8z" podUID="3996ebc6-9eb5-4686-84b4-4c62f64f0ca5" Jan 17 01:20:01.555633 systemd-networkd[1426]: cali7df1a56cbe6: Gained IPv6LL Jan 17 01:20:01.623283 containerd[1503]: 2026-01-17 01:20:01.402 [INFO][4606] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Jan 17 01:20:01.623283 containerd[1503]: 2026-01-17 01:20:01.403 [INFO][4606] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" iface="eth0" netns="/var/run/netns/cni-1eba0814-4d52-11bd-8e47-f729d97c18b2" Jan 17 01:20:01.623283 containerd[1503]: 2026-01-17 01:20:01.404 [INFO][4606] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" iface="eth0" netns="/var/run/netns/cni-1eba0814-4d52-11bd-8e47-f729d97c18b2" Jan 17 01:20:01.623283 containerd[1503]: 2026-01-17 01:20:01.408 [INFO][4606] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" iface="eth0" netns="/var/run/netns/cni-1eba0814-4d52-11bd-8e47-f729d97c18b2" Jan 17 01:20:01.623283 containerd[1503]: 2026-01-17 01:20:01.409 [INFO][4606] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Jan 17 01:20:01.623283 containerd[1503]: 2026-01-17 01:20:01.409 [INFO][4606] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Jan 17 01:20:01.623283 containerd[1503]: 2026-01-17 01:20:01.588 [INFO][4640] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" HandleID="k8s-pod-network.38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Workload="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:01.623283 containerd[1503]: 2026-01-17 01:20:01.590 [INFO][4640] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:01.623283 containerd[1503]: 2026-01-17 01:20:01.590 [INFO][4640] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:01.623283 containerd[1503]: 2026-01-17 01:20:01.605 [WARNING][4640] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" HandleID="k8s-pod-network.38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Workload="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:01.623283 containerd[1503]: 2026-01-17 01:20:01.605 [INFO][4640] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" HandleID="k8s-pod-network.38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Workload="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:01.623283 containerd[1503]: 2026-01-17 01:20:01.610 [INFO][4640] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:01.623283 containerd[1503]: 2026-01-17 01:20:01.614 [INFO][4606] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Jan 17 01:20:01.634961 containerd[1503]: time="2026-01-17T01:20:01.629886171Z" level=info msg="TearDown network for sandbox \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\" successfully" Jan 17 01:20:01.634961 containerd[1503]: time="2026-01-17T01:20:01.630047837Z" level=info msg="StopPodSandbox for \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\" returns successfully" Jan 17 01:20:01.635837 systemd[1]: run-netns-cni\x2d1eba0814\x2d4d52\x2d11bd\x2d8e47\x2df729d97c18b2.mount: Deactivated successfully. Jan 17 01:20:01.650004 containerd[1503]: time="2026-01-17T01:20:01.648405724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-trqpb,Uid:03cd3dbd-5e1b-4532-9c89-eb080f7c53df,Namespace:calico-system,Attempt:1,}" Jan 17 01:20:01.661884 containerd[1503]: 2026-01-17 01:20:01.469 [INFO][4618] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Jan 17 01:20:01.661884 containerd[1503]: 2026-01-17 01:20:01.469 [INFO][4618] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" iface="eth0" netns="/var/run/netns/cni-948d0303-22ca-1b62-3328-9145fd8ff7a9" Jan 17 01:20:01.661884 containerd[1503]: 2026-01-17 01:20:01.469 [INFO][4618] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" iface="eth0" netns="/var/run/netns/cni-948d0303-22ca-1b62-3328-9145fd8ff7a9" Jan 17 01:20:01.661884 containerd[1503]: 2026-01-17 01:20:01.471 [INFO][4618] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" iface="eth0" netns="/var/run/netns/cni-948d0303-22ca-1b62-3328-9145fd8ff7a9" Jan 17 01:20:01.661884 containerd[1503]: 2026-01-17 01:20:01.471 [INFO][4618] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Jan 17 01:20:01.661884 containerd[1503]: 2026-01-17 01:20:01.471 [INFO][4618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Jan 17 01:20:01.661884 containerd[1503]: 2026-01-17 01:20:01.594 [INFO][4650] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" HandleID="k8s-pod-network.37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:01.661884 containerd[1503]: 2026-01-17 01:20:01.595 [INFO][4650] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:01.661884 containerd[1503]: 2026-01-17 01:20:01.610 [INFO][4650] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:01.661884 containerd[1503]: 2026-01-17 01:20:01.648 [WARNING][4650] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" HandleID="k8s-pod-network.37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:01.661884 containerd[1503]: 2026-01-17 01:20:01.648 [INFO][4650] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" HandleID="k8s-pod-network.37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:01.661884 containerd[1503]: 2026-01-17 01:20:01.652 [INFO][4650] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:01.661884 containerd[1503]: 2026-01-17 01:20:01.657 [INFO][4618] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Jan 17 01:20:01.666579 containerd[1503]: time="2026-01-17T01:20:01.662843977Z" level=info msg="TearDown network for sandbox \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\" successfully" Jan 17 01:20:01.666579 containerd[1503]: time="2026-01-17T01:20:01.662906567Z" level=info msg="StopPodSandbox for \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\" returns successfully" Jan 17 01:20:01.671907 systemd[1]: run-netns-cni\x2d948d0303\x2d22ca\x2d1b62\x2d3328\x2d9145fd8ff7a9.mount: Deactivated successfully. Jan 17 01:20:01.678529 containerd[1503]: time="2026-01-17T01:20:01.677851015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b97d9fcf7-cbr65,Uid:e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0,Namespace:calico-apiserver,Attempt:1,}" Jan 17 01:20:01.687997 containerd[1503]: 2026-01-17 01:20:01.442 [INFO][4607] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Jan 17 01:20:01.687997 containerd[1503]: 2026-01-17 01:20:01.444 [INFO][4607] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" iface="eth0" netns="/var/run/netns/cni-55d78dcb-1582-0d9d-892a-f72421da7928" Jan 17 01:20:01.687997 containerd[1503]: 2026-01-17 01:20:01.445 [INFO][4607] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" iface="eth0" netns="/var/run/netns/cni-55d78dcb-1582-0d9d-892a-f72421da7928" Jan 17 01:20:01.687997 containerd[1503]: 2026-01-17 01:20:01.446 [INFO][4607] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" iface="eth0" netns="/var/run/netns/cni-55d78dcb-1582-0d9d-892a-f72421da7928" Jan 17 01:20:01.687997 containerd[1503]: 2026-01-17 01:20:01.446 [INFO][4607] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Jan 17 01:20:01.687997 containerd[1503]: 2026-01-17 01:20:01.446 [INFO][4607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Jan 17 01:20:01.687997 containerd[1503]: 2026-01-17 01:20:01.606 [INFO][4645] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" HandleID="k8s-pod-network.4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Workload="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:01.687997 containerd[1503]: 2026-01-17 01:20:01.609 [INFO][4645] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:01.687997 containerd[1503]: 2026-01-17 01:20:01.653 [INFO][4645] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:01.687997 containerd[1503]: 2026-01-17 01:20:01.673 [WARNING][4645] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" HandleID="k8s-pod-network.4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Workload="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:01.687997 containerd[1503]: 2026-01-17 01:20:01.673 [INFO][4645] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" HandleID="k8s-pod-network.4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Workload="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:01.687997 containerd[1503]: 2026-01-17 01:20:01.678 [INFO][4645] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:01.687997 containerd[1503]: 2026-01-17 01:20:01.683 [INFO][4607] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Jan 17 01:20:01.690977 containerd[1503]: time="2026-01-17T01:20:01.688182096Z" level=info msg="TearDown network for sandbox \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\" successfully" Jan 17 01:20:01.690977 containerd[1503]: time="2026-01-17T01:20:01.688215737Z" level=info msg="StopPodSandbox for \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\" returns successfully" Jan 17 01:20:01.698394 systemd[1]: run-netns-cni\x2d55d78dcb\x2d1582\x2d0d9d\x2d892a\x2df72421da7928.mount: Deactivated successfully. Jan 17 01:20:01.709907 containerd[1503]: time="2026-01-17T01:20:01.705085293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zj4mv,Uid:569a36a0-46e6-4752-8b8f-005d85b2712f,Namespace:calico-system,Attempt:1,}" Jan 17 01:20:01.737233 kubelet[2689]: E0117 01:20:01.736259 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-9dv76" podUID="94302249-2c45-4fa9-a8ed-6356d7141062" Jan 17 01:20:01.737233 kubelet[2689]: E0117 01:20:01.737164 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559fb66ff-gwd8z" podUID="3996ebc6-9eb5-4686-84b4-4c62f64f0ca5" Jan 17 01:20:01.774653 containerd[1503]: 2026-01-17 01:20:01.482 [INFO][4620] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Jan 17 01:20:01.774653 containerd[1503]: 2026-01-17 01:20:01.482 [INFO][4620] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" iface="eth0" netns="/var/run/netns/cni-367eaffa-3aa6-7c87-d62a-6f4b5a840142" Jan 17 01:20:01.774653 containerd[1503]: 2026-01-17 01:20:01.485 [INFO][4620] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" iface="eth0" netns="/var/run/netns/cni-367eaffa-3aa6-7c87-d62a-6f4b5a840142" Jan 17 01:20:01.774653 containerd[1503]: 2026-01-17 01:20:01.485 [INFO][4620] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" iface="eth0" netns="/var/run/netns/cni-367eaffa-3aa6-7c87-d62a-6f4b5a840142" Jan 17 01:20:01.774653 containerd[1503]: 2026-01-17 01:20:01.485 [INFO][4620] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Jan 17 01:20:01.774653 containerd[1503]: 2026-01-17 01:20:01.485 [INFO][4620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Jan 17 01:20:01.774653 containerd[1503]: 2026-01-17 01:20:01.617 [INFO][4653] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" HandleID="k8s-pod-network.230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:01.774653 containerd[1503]: 2026-01-17 01:20:01.618 [INFO][4653] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:01.774653 containerd[1503]: 2026-01-17 01:20:01.678 [INFO][4653] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:01.774653 containerd[1503]: 2026-01-17 01:20:01.709 [WARNING][4653] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" HandleID="k8s-pod-network.230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:01.774653 containerd[1503]: 2026-01-17 01:20:01.709 [INFO][4653] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" HandleID="k8s-pod-network.230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:01.774653 containerd[1503]: 2026-01-17 01:20:01.719 [INFO][4653] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:01.774653 containerd[1503]: 2026-01-17 01:20:01.750 [INFO][4620] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Jan 17 01:20:01.782466 containerd[1503]: time="2026-01-17T01:20:01.782126392Z" level=info msg="TearDown network for sandbox \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\" successfully" Jan 17 01:20:01.782466 containerd[1503]: time="2026-01-17T01:20:01.782170788Z" level=info msg="StopPodSandbox for \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\" returns successfully" Jan 17 01:20:01.803311 containerd[1503]: time="2026-01-17T01:20:01.802846401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fj97p,Uid:612dd22b-6369-4c99-85b1-59da9f6da310,Namespace:kube-system,Attempt:1,}" Jan 17 01:20:01.937662 systemd[1]: run-netns-cni\x2d367eaffa\x2d3aa6\x2d7c87\x2dd62a\x2d6f4b5a840142.mount: Deactivated successfully. Jan 17 01:20:01.939087 systemd-networkd[1426]: cali7a0a6f3c10f: Gained IPv6LL Jan 17 01:20:02.282553 systemd-networkd[1426]: cali4fae8d877b0: Link UP Jan 17 01:20:02.288351 systemd-networkd[1426]: cali4fae8d877b0: Gained carrier Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:01.882 [INFO][4672] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0 goldmane-7c778bb748- calico-system 03cd3dbd-5e1b-4532-9c89-eb080f7c53df 1047 0 2026-01-17 01:19:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-3374x.gb1.brightbox.com goldmane-7c778bb748-trqpb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4fae8d877b0 [] [] }} ContainerID="88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" Namespace="calico-system" Pod="goldmane-7c778bb748-trqpb" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-" Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:01.885 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" Namespace="calico-system" Pod="goldmane-7c778bb748-trqpb" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.048 [INFO][4730] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" HandleID="k8s-pod-network.88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" Workload="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.048 [INFO][4730] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" HandleID="k8s-pod-network.88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" Workload="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5930), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-3374x.gb1.brightbox.com", "pod":"goldmane-7c778bb748-trqpb", "timestamp":"2026-01-17 01:20:02.048746162 +0000 UTC"}, Hostname:"srv-3374x.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.049 [INFO][4730] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.049 [INFO][4730] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.049 [INFO][4730] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3374x.gb1.brightbox.com' Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.090 [INFO][4730] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.132 [INFO][4730] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.186 [INFO][4730] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.192 [INFO][4730] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.215 [INFO][4730] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.220 [INFO][4730] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.228 [INFO][4730] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.246 [INFO][4730] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.261 [INFO][4730] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.133/26] block=192.168.2.128/26 handle="k8s-pod-network.88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.261 [INFO][4730] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.133/26] handle="k8s-pod-network.88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.263 [INFO][4730] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
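The IPAM trace above (acquire the host-wide lock, confirm block affinity for 192.168.2.128/26, claim an address, write the block, release the lock) follows a simple claim-under-lock pattern. A minimal sketch of that pattern in Go — using hypothetical block/map types, not Calico's actual libcalico-go API — might look like:

package main

import (
	"fmt"
	"net"
	"sync"
)

// hostLock stands in for the host-wide IPAM lock that brackets every
// assignment and release in the log above.
var hostLock sync.Mutex

// block models one /26 allocation block with affinity to this host.
type block struct {
	cidr      *net.IPNet
	allocated map[string]string // IP -> handle, e.g. "k8s-pod-network.<containerID>"
}

// autoAssign mirrors the logged sequence: lock, walk the affine block for a
// free address, record it under the handle ("Writing block in order to
// claim IPs"), then release the lock.
func autoAssign(b *block, handle string) (net.IP, error) {
	hostLock.Lock()
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	base := b.cidr.IP.To4()
	for i := 0; i < 64; i++ { // a /26 spans 64 addresses
		cand := net.IPv4(base[0], base[1], base[2], base[3]+byte(i))
		if _, taken := b.allocated[cand.String()]; !taken {
			b.allocated[cand.String()] = handle
			return cand, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.2.128/26")
	b := &block{cidr: cidr, allocated: map[string]string{}}
	for i := 0; i < 5; i++ { // .128-.132 already in use on this node
		autoAssign(b, "existing")
	}
	ip, _ := autoAssign(b, "k8s-pod-network.88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c")
	fmt.Println(ip) // 192.168.2.133, matching the goldmane pod's assignment
}

Calico's real implementation persists blocks in the datastore and retries on write conflict; the sketch only captures the lock/claim/write ordering visible in the log.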
Jan 17 01:20:02.315597 containerd[1503]: 2026-01-17 01:20:02.264 [INFO][4730] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.133/26] IPv6=[] ContainerID="88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" HandleID="k8s-pod-network.88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" Workload="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:02.324356 containerd[1503]: 2026-01-17 01:20:02.273 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" Namespace="calico-system" Pod="goldmane-7c778bb748-trqpb" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"03cd3dbd-5e1b-4532-9c89-eb080f7c53df", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-7c778bb748-trqpb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4fae8d877b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:02.324356 containerd[1503]: 2026-01-17 01:20:02.274 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.133/32] ContainerID="88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" Namespace="calico-system" Pod="goldmane-7c778bb748-trqpb" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:02.324356 containerd[1503]: 2026-01-17 01:20:02.274 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fae8d877b0 ContainerID="88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" Namespace="calico-system" Pod="goldmane-7c778bb748-trqpb" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:02.324356 containerd[1503]: 2026-01-17 01:20:02.288 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" Namespace="calico-system" Pod="goldmane-7c778bb748-trqpb" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:02.324356 containerd[1503]: 2026-01-17 01:20:02.289 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" 
Namespace="calico-system" Pod="goldmane-7c778bb748-trqpb" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"03cd3dbd-5e1b-4532-9c89-eb080f7c53df", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c", Pod:"goldmane-7c778bb748-trqpb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4fae8d877b0", MAC:"96:ed:dd:2c:d5:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:02.324356 containerd[1503]: 2026-01-17 01:20:02.312 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c" Namespace="calico-system" Pod="goldmane-7c778bb748-trqpb" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:02.319465 systemd-networkd[1426]: vxlan.calico: Gained IPv6LL Jan 17 01:20:02.374073 containerd[1503]: time="2026-01-17T01:20:02.373250600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:20:02.374073 containerd[1503]: time="2026-01-17T01:20:02.373490256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:20:02.374073 containerd[1503]: time="2026-01-17T01:20:02.373550136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:20:02.374073 containerd[1503]: time="2026-01-17T01:20:02.373726104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:20:02.417684 systemd-networkd[1426]: cali5ba5337a59a: Link UP Jan 17 01:20:02.419875 systemd-networkd[1426]: cali5ba5337a59a: Gained carrier Jan 17 01:20:02.462517 systemd[1]: Started cri-containerd-88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c.scope - libcontainer container 88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c. 
Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:01.970 [INFO][4675] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0 calico-apiserver-6b97d9fcf7- calico-apiserver e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0 1049 0 2026-01-17 01:19:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b97d9fcf7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-3374x.gb1.brightbox.com calico-apiserver-6b97d9fcf7-cbr65 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5ba5337a59a [] [] }} ContainerID="04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-cbr65" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-" Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:01.974 [INFO][4675] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-cbr65" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.220 [INFO][4741] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" HandleID="k8s-pod-network.04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.221 [INFO][4741] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" HandleID="k8s-pod-network.04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000383f40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-3374x.gb1.brightbox.com", "pod":"calico-apiserver-6b97d9fcf7-cbr65", "timestamp":"2026-01-17 01:20:02.220834781 +0000 UTC"}, Hostname:"srv-3374x.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.222 [INFO][4741] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.261 [INFO][4741] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.261 [INFO][4741] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3374x.gb1.brightbox.com' Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.297 [INFO][4741] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.316 [INFO][4741] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.334 [INFO][4741] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.338 [INFO][4741] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.344 [INFO][4741] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.344 [INFO][4741] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.359 [INFO][4741] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.381 [INFO][4741] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.401 [INFO][4741] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.134/26] block=192.168.2.128/26 handle="k8s-pod-network.04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.402 [INFO][4741] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.134/26] handle="k8s-pod-network.04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.402 [INFO][4741] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
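For orientation: 192.168.2.128/26 covers 2^(32-26) = 64 addresses, 192.168.2.128 through 192.168.2.191, so the consecutive assignments in this section (.133 for goldmane, .134 for the apiserver pod, then .135 and .136 below) all come from the single block affine to srv-3374x.gb1.brightbox.com.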
Jan 17 01:20:02.479601 containerd[1503]: 2026-01-17 01:20:02.402 [INFO][4741] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.134/26] IPv6=[] ContainerID="04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" HandleID="k8s-pod-network.04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:02.482961 containerd[1503]: 2026-01-17 01:20:02.405 [INFO][4675] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-cbr65" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0", GenerateName:"calico-apiserver-6b97d9fcf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b97d9fcf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6b97d9fcf7-cbr65", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ba5337a59a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:02.482961 containerd[1503]: 2026-01-17 01:20:02.406 [INFO][4675] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.134/32] ContainerID="04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-cbr65" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:02.482961 containerd[1503]: 2026-01-17 01:20:02.406 [INFO][4675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ba5337a59a ContainerID="04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-cbr65" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:02.482961 containerd[1503]: 2026-01-17 01:20:02.421 [INFO][4675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-cbr65" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:02.482961 containerd[1503]: 2026-01-17 01:20:02.425 
[INFO][4675] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-cbr65" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0", GenerateName:"calico-apiserver-6b97d9fcf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b97d9fcf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb", Pod:"calico-apiserver-6b97d9fcf7-cbr65", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ba5337a59a", MAC:"c6:b6:6a:9f:18:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:02.482961 containerd[1503]: 2026-01-17 01:20:02.467 [INFO][4675] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb" Namespace="calico-apiserver" Pod="calico-apiserver-6b97d9fcf7-cbr65" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:02.548201 systemd-networkd[1426]: cali3d959892152: Link UP Jan 17 01:20:02.549792 systemd-networkd[1426]: cali3d959892152: Gained carrier Jan 17 01:20:02.578592 containerd[1503]: time="2026-01-17T01:20:02.577674722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:20:02.578592 containerd[1503]: time="2026-01-17T01:20:02.577756495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:20:02.578592 containerd[1503]: time="2026-01-17T01:20:02.577774667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:20:02.578592 containerd[1503]: time="2026-01-17T01:20:02.577894129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:01.985 [INFO][4699] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0 csi-node-driver- calico-system 569a36a0-46e6-4752-8b8f-005d85b2712f 1048 0 2026-01-17 01:19:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-3374x.gb1.brightbox.com csi-node-driver-zj4mv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3d959892152 [] [] }} ContainerID="33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" Namespace="calico-system" Pod="csi-node-driver-zj4mv" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-" Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:01.989 [INFO][4699] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" Namespace="calico-system" Pod="csi-node-driver-zj4mv" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.235 [INFO][4750] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" HandleID="k8s-pod-network.33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" Workload="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.235 [INFO][4750] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" HandleID="k8s-pod-network.33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" Workload="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-3374x.gb1.brightbox.com", "pod":"csi-node-driver-zj4mv", "timestamp":"2026-01-17 01:20:02.235756414 +0000 UTC"}, Hostname:"srv-3374x.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.236 [INFO][4750] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.404 [INFO][4750] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.404 [INFO][4750] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3374x.gb1.brightbox.com' Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.453 [INFO][4750] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.479 [INFO][4750] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.490 [INFO][4750] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.494 [INFO][4750] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.498 [INFO][4750] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.499 [INFO][4750] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.501 [INFO][4750] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.511 [INFO][4750] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.527 [INFO][4750] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.135/26] block=192.168.2.128/26 handle="k8s-pod-network.33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.528 [INFO][4750] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.135/26] handle="k8s-pod-network.33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.528 [INFO][4750] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 01:20:02.617807 containerd[1503]: 2026-01-17 01:20:02.528 [INFO][4750] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.135/26] IPv6=[] ContainerID="33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" HandleID="k8s-pod-network.33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" Workload="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:02.620844 containerd[1503]: 2026-01-17 01:20:02.534 [INFO][4699] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" Namespace="calico-system" Pod="csi-node-driver-zj4mv" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"569a36a0-46e6-4752-8b8f-005d85b2712f", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-zj4mv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d959892152", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:02.620844 containerd[1503]: 2026-01-17 01:20:02.535 [INFO][4699] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.135/32] ContainerID="33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" Namespace="calico-system" Pod="csi-node-driver-zj4mv" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:02.620844 containerd[1503]: 2026-01-17 01:20:02.535 [INFO][4699] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d959892152 ContainerID="33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" Namespace="calico-system" Pod="csi-node-driver-zj4mv" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:02.620844 containerd[1503]: 2026-01-17 01:20:02.552 [INFO][4699] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" Namespace="calico-system" Pod="csi-node-driver-zj4mv" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:02.620844 containerd[1503]: 2026-01-17 01:20:02.561 [INFO][4699] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" Namespace="calico-system" Pod="csi-node-driver-zj4mv" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"569a36a0-46e6-4752-8b8f-005d85b2712f", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a", Pod:"csi-node-driver-zj4mv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d959892152", MAC:"6e:9a:a4:de:38:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:02.620844 containerd[1503]: 2026-01-17 01:20:02.606 [INFO][4699] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a" Namespace="calico-system" Pod="csi-node-driver-zj4mv" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:02.627382 systemd[1]: Started cri-containerd-04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb.scope - libcontainer container 04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb. Jan 17 01:20:02.692430 containerd[1503]: time="2026-01-17T01:20:02.690684054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:20:02.692430 containerd[1503]: time="2026-01-17T01:20:02.690793809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:20:02.692430 containerd[1503]: time="2026-01-17T01:20:02.690813714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:20:02.692430 containerd[1503]: time="2026-01-17T01:20:02.690961551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:20:02.692685 systemd-networkd[1426]: calie06a89415a8: Link UP Jan 17 01:20:02.703918 systemd-networkd[1426]: calie06a89415a8: Gained carrier Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.004 [INFO][4713] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0 coredns-66bc5c9577- kube-system 612dd22b-6369-4c99-85b1-59da9f6da310 1050 0 2026-01-17 01:19:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-3374x.gb1.brightbox.com coredns-66bc5c9577-fj97p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie06a89415a8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" Namespace="kube-system" Pod="coredns-66bc5c9577-fj97p" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-" Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.007 [INFO][4713] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" Namespace="kube-system" Pod="coredns-66bc5c9577-fj97p" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.256 [INFO][4754] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" HandleID="k8s-pod-network.d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.257 [INFO][4754] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" HandleID="k8s-pod-network.d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001224d0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-3374x.gb1.brightbox.com", "pod":"coredns-66bc5c9577-fj97p", "timestamp":"2026-01-17 01:20:02.256129424 +0000 UTC"}, Hostname:"srv-3374x.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.257 [INFO][4754] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.528 [INFO][4754] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.528 [INFO][4754] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3374x.gb1.brightbox.com' Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.551 [INFO][4754] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.581 [INFO][4754] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.590 [INFO][4754] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.595 [INFO][4754] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.610 [INFO][4754] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.610 [INFO][4754] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.626 [INFO][4754] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4 Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.654 [INFO][4754] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.668 [INFO][4754] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.136/26] block=192.168.2.128/26 handle="k8s-pod-network.d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.668 [INFO][4754] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.136/26] handle="k8s-pod-network.d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.669 [INFO][4754] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 01:20:02.760051 containerd[1503]: 2026-01-17 01:20:02.669 [INFO][4754] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.136/26] IPv6=[] ContainerID="d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" HandleID="k8s-pod-network.d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:02.762573 containerd[1503]: 2026-01-17 01:20:02.682 [INFO][4713] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" Namespace="kube-system" Pod="coredns-66bc5c9577-fj97p" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"612dd22b-6369-4c99-85b1-59da9f6da310", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-fj97p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie06a89415a8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:02.762573 containerd[1503]: 2026-01-17 01:20:02.682 [INFO][4713] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.136/32] ContainerID="d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" Namespace="kube-system" Pod="coredns-66bc5c9577-fj97p" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:02.762573 containerd[1503]: 2026-01-17 01:20:02.684 [INFO][4713] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie06a89415a8 ContainerID="d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" Namespace="kube-system" Pod="coredns-66bc5c9577-fj97p" 
WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:02.762573 containerd[1503]: 2026-01-17 01:20:02.706 [INFO][4713] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" Namespace="kube-system" Pod="coredns-66bc5c9577-fj97p" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:02.762573 containerd[1503]: 2026-01-17 01:20:02.714 [INFO][4713] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" Namespace="kube-system" Pod="coredns-66bc5c9577-fj97p" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"612dd22b-6369-4c99-85b1-59da9f6da310", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4", Pod:"coredns-66bc5c9577-fj97p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie06a89415a8", MAC:"2a:9f:b1:83:8f:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:02.762925 containerd[1503]: 2026-01-17 01:20:02.738 [INFO][4713] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4" Namespace="kube-system" Pod="coredns-66bc5c9577-fj97p" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:02.782184 containerd[1503]: time="2026-01-17T01:20:02.781696991Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-trqpb,Uid:03cd3dbd-5e1b-4532-9c89-eb080f7c53df,Namespace:calico-system,Attempt:1,} returns sandbox id \"88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c\"" Jan 17 01:20:02.793258 containerd[1503]: time="2026-01-17T01:20:02.791638729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 01:20:02.795837 systemd[1]: Started cri-containerd-33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a.scope - libcontainer container 33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a. Jan 17 01:20:02.858164 containerd[1503]: time="2026-01-17T01:20:02.857207144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:20:02.858164 containerd[1503]: time="2026-01-17T01:20:02.857669950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:20:02.858164 containerd[1503]: time="2026-01-17T01:20:02.857704301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:20:02.862824 containerd[1503]: time="2026-01-17T01:20:02.861964264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:20:02.880535 containerd[1503]: time="2026-01-17T01:20:02.879326146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b97d9fcf7-cbr65,Uid:e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb\"" Jan 17 01:20:02.908513 systemd[1]: Started cri-containerd-d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4.scope - libcontainer container d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4. Jan 17 01:20:02.914482 containerd[1503]: time="2026-01-17T01:20:02.914432207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zj4mv,Uid:569a36a0-46e6-4752-8b8f-005d85b2712f,Namespace:calico-system,Attempt:1,} returns sandbox id \"33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a\"" Jan 17 01:20:02.985402 containerd[1503]: time="2026-01-17T01:20:02.984590467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fj97p,Uid:612dd22b-6369-4c99-85b1-59da9f6da310,Namespace:kube-system,Attempt:1,} returns sandbox id \"d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4\"" Jan 17 01:20:02.998878 containerd[1503]: time="2026-01-17T01:20:02.998821012Z" level=info msg="CreateContainer within sandbox \"d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 01:20:03.027173 containerd[1503]: time="2026-01-17T01:20:03.027046295Z" level=info msg="CreateContainer within sandbox \"d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"94d1b563b959cd8b3bb4d155cb0a43190bfac796167389db34b3fe0966a2c247\"" Jan 17 01:20:03.031413 containerd[1503]: time="2026-01-17T01:20:03.028445278Z" level=info msg="StartContainer for \"94d1b563b959cd8b3bb4d155cb0a43190bfac796167389db34b3fe0966a2c247\"" Jan 17 01:20:03.037687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753097925.mount: Deactivated successfully. 
Jan 17 01:20:03.081584 systemd[1]: Started cri-containerd-94d1b563b959cd8b3bb4d155cb0a43190bfac796167389db34b3fe0966a2c247.scope - libcontainer container 94d1b563b959cd8b3bb4d155cb0a43190bfac796167389db34b3fe0966a2c247. Jan 17 01:20:03.102652 containerd[1503]: time="2026-01-17T01:20:03.101185429Z" level=info msg="StopPodSandbox for \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\"" Jan 17 01:20:03.142815 containerd[1503]: time="2026-01-17T01:20:03.142401039Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:03.146234 containerd[1503]: time="2026-01-17T01:20:03.146152563Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 01:20:03.147948 containerd[1503]: time="2026-01-17T01:20:03.146259406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 01:20:03.148428 kubelet[2689]: E0117 01:20:03.148378 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 01:20:03.149688 kubelet[2689]: E0117 01:20:03.148940 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 01:20:03.149688 kubelet[2689]: E0117 01:20:03.149182 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-trqpb_calico-system(03cd3dbd-5e1b-4532-9c89-eb080f7c53df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:03.149688 kubelet[2689]: E0117 01:20:03.149234 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-trqpb" podUID="03cd3dbd-5e1b-4532-9c89-eb080f7c53df" Jan 17 01:20:03.150641 containerd[1503]: time="2026-01-17T01:20:03.150606360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:20:03.179901 containerd[1503]: time="2026-01-17T01:20:03.179601761Z" level=info msg="StartContainer for \"94d1b563b959cd8b3bb4d155cb0a43190bfac796167389db34b3fe0966a2c247\" returns successfully" Jan 17 01:20:03.296296 containerd[1503]: 2026-01-17 01:20:03.220 [WARNING][5035] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"569a36a0-46e6-4752-8b8f-005d85b2712f", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a", Pod:"csi-node-driver-zj4mv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d959892152", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:03.296296 containerd[1503]: 2026-01-17 01:20:03.220 [INFO][5035] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Jan 17 01:20:03.296296 containerd[1503]: 2026-01-17 01:20:03.220 [INFO][5035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" iface="eth0" netns="" Jan 17 01:20:03.296296 containerd[1503]: 2026-01-17 01:20:03.220 [INFO][5035] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Jan 17 01:20:03.296296 containerd[1503]: 2026-01-17 01:20:03.220 [INFO][5035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Jan 17 01:20:03.296296 containerd[1503]: 2026-01-17 01:20:03.274 [INFO][5050] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" HandleID="k8s-pod-network.4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Workload="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:03.296296 containerd[1503]: 2026-01-17 01:20:03.275 [INFO][5050] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:03.296296 containerd[1503]: 2026-01-17 01:20:03.275 [INFO][5050] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:03.296296 containerd[1503]: 2026-01-17 01:20:03.288 [WARNING][5050] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" HandleID="k8s-pod-network.4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Workload="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:03.296296 containerd[1503]: 2026-01-17 01:20:03.288 [INFO][5050] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" HandleID="k8s-pod-network.4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Workload="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:03.296296 containerd[1503]: 2026-01-17 01:20:03.290 [INFO][5050] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:03.296296 containerd[1503]: 2026-01-17 01:20:03.293 [INFO][5035] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Jan 17 01:20:03.297735 containerd[1503]: time="2026-01-17T01:20:03.296188793Z" level=info msg="TearDown network for sandbox \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\" successfully" Jan 17 01:20:03.297735 containerd[1503]: time="2026-01-17T01:20:03.296593045Z" level=info msg="StopPodSandbox for \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\" returns successfully" Jan 17 01:20:03.299815 containerd[1503]: time="2026-01-17T01:20:03.299234835Z" level=info msg="RemovePodSandbox for \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\"" Jan 17 01:20:03.299815 containerd[1503]: time="2026-01-17T01:20:03.299310596Z" level=info msg="Forcibly stopping sandbox \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\"" Jan 17 01:20:03.421031 containerd[1503]: 2026-01-17 01:20:03.357 [WARNING][5068] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"569a36a0-46e6-4752-8b8f-005d85b2712f", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"33630cc6d6aa2ac2780f1bfc947745fd85177f1a2eb519ace709822f1763930a", Pod:"csi-node-driver-zj4mv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d959892152", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:03.421031 containerd[1503]: 2026-01-17 01:20:03.357 [INFO][5068] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Jan 17 01:20:03.421031 containerd[1503]: 2026-01-17 01:20:03.357 [INFO][5068] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" iface="eth0" netns="" Jan 17 01:20:03.421031 containerd[1503]: 2026-01-17 01:20:03.357 [INFO][5068] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Jan 17 01:20:03.421031 containerd[1503]: 2026-01-17 01:20:03.357 [INFO][5068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Jan 17 01:20:03.421031 containerd[1503]: 2026-01-17 01:20:03.400 [INFO][5075] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" HandleID="k8s-pod-network.4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Workload="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:03.421031 containerd[1503]: 2026-01-17 01:20:03.400 [INFO][5075] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:03.421031 containerd[1503]: 2026-01-17 01:20:03.400 [INFO][5075] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:03.421031 containerd[1503]: 2026-01-17 01:20:03.413 [WARNING][5075] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" HandleID="k8s-pod-network.4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Workload="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:03.421031 containerd[1503]: 2026-01-17 01:20:03.413 [INFO][5075] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" HandleID="k8s-pod-network.4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Workload="srv--3374x.gb1.brightbox.com-k8s-csi--node--driver--zj4mv-eth0" Jan 17 01:20:03.421031 containerd[1503]: 2026-01-17 01:20:03.415 [INFO][5075] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:03.421031 containerd[1503]: 2026-01-17 01:20:03.418 [INFO][5068] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388" Jan 17 01:20:03.421031 containerd[1503]: time="2026-01-17T01:20:03.420982742Z" level=info msg="TearDown network for sandbox \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\" successfully" Jan 17 01:20:03.460770 containerd[1503]: time="2026-01-17T01:20:03.460656733Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:20:03.460956 containerd[1503]: time="2026-01-17T01:20:03.460793117Z" level=info msg="RemovePodSandbox \"4d8f1ef064c101c6ecc22ec55f4a06533851b998e3ba480dbbee90eb4217d388\" returns successfully" Jan 17 01:20:03.461844 containerd[1503]: time="2026-01-17T01:20:03.461796184Z" level=info msg="StopPodSandbox for \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\"" Jan 17 01:20:03.495730 containerd[1503]: time="2026-01-17T01:20:03.495649389Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:03.503987 containerd[1503]: time="2026-01-17T01:20:03.503822160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:20:03.504366 containerd[1503]: time="2026-01-17T01:20:03.504297772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:20:03.505298 kubelet[2689]: E0117 01:20:03.505102 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:03.505298 kubelet[2689]: E0117 01:20:03.505184 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:03.506229 kubelet[2689]: E0117 01:20:03.505661 2689 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b97d9fcf7-cbr65_calico-apiserver(e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:03.506229 kubelet[2689]: E0117 01:20:03.505764 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" podUID="e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0" Jan 17 01:20:03.508075 containerd[1503]: time="2026-01-17T01:20:03.507651268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 01:20:03.592123 containerd[1503]: 2026-01-17 01:20:03.524 [WARNING][5091] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"03cd3dbd-5e1b-4532-9c89-eb080f7c53df", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c", Pod:"goldmane-7c778bb748-trqpb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4fae8d877b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:03.592123 containerd[1503]: 2026-01-17 01:20:03.525 [INFO][5091] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Jan 17 01:20:03.592123 containerd[1503]: 2026-01-17 01:20:03.525 [INFO][5091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" iface="eth0" netns="" Jan 17 01:20:03.592123 containerd[1503]: 2026-01-17 01:20:03.525 [INFO][5091] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Jan 17 01:20:03.592123 containerd[1503]: 2026-01-17 01:20:03.525 [INFO][5091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Jan 17 01:20:03.592123 containerd[1503]: 2026-01-17 01:20:03.561 [INFO][5098] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" HandleID="k8s-pod-network.38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Workload="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:03.592123 containerd[1503]: 2026-01-17 01:20:03.561 [INFO][5098] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:03.592123 containerd[1503]: 2026-01-17 01:20:03.562 [INFO][5098] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:03.592123 containerd[1503]: 2026-01-17 01:20:03.578 [WARNING][5098] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" HandleID="k8s-pod-network.38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Workload="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:03.592123 containerd[1503]: 2026-01-17 01:20:03.578 [INFO][5098] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" HandleID="k8s-pod-network.38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Workload="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:03.592123 containerd[1503]: 2026-01-17 01:20:03.582 [INFO][5098] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:03.592123 containerd[1503]: 2026-01-17 01:20:03.588 [INFO][5091] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Jan 17 01:20:03.593391 containerd[1503]: time="2026-01-17T01:20:03.593168660Z" level=info msg="TearDown network for sandbox \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\" successfully" Jan 17 01:20:03.593391 containerd[1503]: time="2026-01-17T01:20:03.593205459Z" level=info msg="StopPodSandbox for \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\" returns successfully" Jan 17 01:20:03.595833 containerd[1503]: time="2026-01-17T01:20:03.595469804Z" level=info msg="RemovePodSandbox for \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\"" Jan 17 01:20:03.595833 containerd[1503]: time="2026-01-17T01:20:03.595509021Z" level=info msg="Forcibly stopping sandbox \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\"" Jan 17 01:20:03.599831 systemd-networkd[1426]: cali5ba5337a59a: Gained IPv6LL Jan 17 01:20:03.740776 kubelet[2689]: E0117 01:20:03.740116 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-trqpb" podUID="03cd3dbd-5e1b-4532-9c89-eb080f7c53df" Jan 17 01:20:03.750080 kubelet[2689]: E0117 01:20:03.749745 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" podUID="e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0" Jan 17 01:20:03.789978 containerd[1503]: 2026-01-17 01:20:03.694 [WARNING][5112] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"03cd3dbd-5e1b-4532-9c89-eb080f7c53df", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"88eced2b0dc26dfc92a6b669328bcd2c868bf06ac127e4fa67ca75b0af73b03c", Pod:"goldmane-7c778bb748-trqpb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4fae8d877b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:03.789978 containerd[1503]: 2026-01-17 01:20:03.695 [INFO][5112] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Jan 17 01:20:03.789978 containerd[1503]: 2026-01-17 01:20:03.695 [INFO][5112] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" iface="eth0" netns="" Jan 17 01:20:03.789978 containerd[1503]: 2026-01-17 01:20:03.695 [INFO][5112] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Jan 17 01:20:03.789978 containerd[1503]: 2026-01-17 01:20:03.695 [INFO][5112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Jan 17 01:20:03.789978 containerd[1503]: 2026-01-17 01:20:03.755 [INFO][5120] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" HandleID="k8s-pod-network.38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Workload="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:03.789978 containerd[1503]: 2026-01-17 01:20:03.756 [INFO][5120] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:03.789978 containerd[1503]: 2026-01-17 01:20:03.757 [INFO][5120] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:03.789978 containerd[1503]: 2026-01-17 01:20:03.782 [WARNING][5120] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" HandleID="k8s-pod-network.38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Workload="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:03.789978 containerd[1503]: 2026-01-17 01:20:03.782 [INFO][5120] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" HandleID="k8s-pod-network.38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Workload="srv--3374x.gb1.brightbox.com-k8s-goldmane--7c778bb748--trqpb-eth0" Jan 17 01:20:03.789978 containerd[1503]: 2026-01-17 01:20:03.784 [INFO][5120] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:03.789978 containerd[1503]: 2026-01-17 01:20:03.786 [INFO][5112] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815" Jan 17 01:20:03.791127 containerd[1503]: time="2026-01-17T01:20:03.790363763Z" level=info msg="TearDown network for sandbox \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\" successfully" Jan 17 01:20:03.796018 kubelet[2689]: I0117 01:20:03.794455 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fj97p" podStartSLOduration=54.794421367 podStartE2EDuration="54.794421367s" podCreationTimestamp="2026-01-17 01:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 01:20:03.793812604 +0000 UTC m=+61.004312509" watchObservedRunningTime="2026-01-17 01:20:03.794421367 +0000 UTC m=+61.004921253" Jan 17 01:20:03.798611 containerd[1503]: time="2026-01-17T01:20:03.798563773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 01:20:03.798936 containerd[1503]: time="2026-01-17T01:20:03.798889677Z" level=info msg="RemovePodSandbox \"38d6db189abc52e209fec5d226931fd28c1e5f0b4530fb56815a89947de43815\" returns successfully" Jan 17 01:20:03.800385 containerd[1503]: time="2026-01-17T01:20:03.800024638Z" level=info msg="StopPodSandbox for \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\"" Jan 17 01:20:03.830387 containerd[1503]: time="2026-01-17T01:20:03.830317502Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:03.834791 containerd[1503]: time="2026-01-17T01:20:03.833526956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 01:20:03.834791 containerd[1503]: time="2026-01-17T01:20:03.833632290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 01:20:03.834950 kubelet[2689]: E0117 01:20:03.833815 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 01:20:03.834950 kubelet[2689]: E0117 01:20:03.833872 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 01:20:03.834950 kubelet[2689]: E0117 01:20:03.834012 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zj4mv_calico-system(569a36a0-46e6-4752-8b8f-005d85b2712f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:03.835596 containerd[1503]: time="2026-01-17T01:20:03.835519203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 01:20:03.949823 containerd[1503]: 2026-01-17 01:20:03.894 [WARNING][5136] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-whisker--5665f4c7d8--97hgh-eth0" Jan 17 01:20:03.949823 containerd[1503]: 2026-01-17 01:20:03.895 [INFO][5136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Jan 17 01:20:03.949823 containerd[1503]: 2026-01-17 01:20:03.895 [INFO][5136] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" iface="eth0" netns="" Jan 17 01:20:03.949823 containerd[1503]: 2026-01-17 01:20:03.895 [INFO][5136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Jan 17 01:20:03.949823 containerd[1503]: 2026-01-17 01:20:03.895 [INFO][5136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Jan 17 01:20:03.949823 containerd[1503]: 2026-01-17 01:20:03.931 [INFO][5145] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" HandleID="k8s-pod-network.447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Workload="srv--3374x.gb1.brightbox.com-k8s-whisker--5665f4c7d8--97hgh-eth0" Jan 17 01:20:03.949823 containerd[1503]: 2026-01-17 01:20:03.931 [INFO][5145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:03.949823 containerd[1503]: 2026-01-17 01:20:03.931 [INFO][5145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:03.949823 containerd[1503]: 2026-01-17 01:20:03.940 [WARNING][5145] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" HandleID="k8s-pod-network.447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Workload="srv--3374x.gb1.brightbox.com-k8s-whisker--5665f4c7d8--97hgh-eth0" Jan 17 01:20:03.949823 containerd[1503]: 2026-01-17 01:20:03.940 [INFO][5145] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" HandleID="k8s-pod-network.447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Workload="srv--3374x.gb1.brightbox.com-k8s-whisker--5665f4c7d8--97hgh-eth0" Jan 17 01:20:03.949823 containerd[1503]: 2026-01-17 01:20:03.945 [INFO][5145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:03.949823 containerd[1503]: 2026-01-17 01:20:03.947 [INFO][5136] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Jan 17 01:20:03.951133 containerd[1503]: time="2026-01-17T01:20:03.950608807Z" level=info msg="TearDown network for sandbox \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\" successfully" Jan 17 01:20:03.951133 containerd[1503]: time="2026-01-17T01:20:03.950671452Z" level=info msg="StopPodSandbox for \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\" returns successfully" Jan 17 01:20:03.952306 containerd[1503]: time="2026-01-17T01:20:03.951778446Z" level=info msg="RemovePodSandbox for \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\"" Jan 17 01:20:03.952306 containerd[1503]: time="2026-01-17T01:20:03.951813389Z" level=info msg="Forcibly stopping sandbox \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\"" Jan 17 01:20:03.983493 systemd-networkd[1426]: cali4fae8d877b0: Gained IPv6LL Jan 17 01:20:04.048619 systemd-networkd[1426]: calie06a89415a8: Gained IPv6LL Jan 17 01:20:04.069108 containerd[1503]: 2026-01-17 01:20:04.014 [WARNING][5160] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-whisker--5665f4c7d8--97hgh-eth0" Jan 17 01:20:04.069108 containerd[1503]: 2026-01-17 01:20:04.014 [INFO][5160] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Jan 17 01:20:04.069108 containerd[1503]: 2026-01-17 01:20:04.014 [INFO][5160] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" iface="eth0" netns="" Jan 17 01:20:04.069108 containerd[1503]: 2026-01-17 01:20:04.014 [INFO][5160] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Jan 17 01:20:04.069108 containerd[1503]: 2026-01-17 01:20:04.014 [INFO][5160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Jan 17 01:20:04.069108 containerd[1503]: 2026-01-17 01:20:04.053 [INFO][5167] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" HandleID="k8s-pod-network.447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Workload="srv--3374x.gb1.brightbox.com-k8s-whisker--5665f4c7d8--97hgh-eth0" Jan 17 01:20:04.069108 containerd[1503]: 2026-01-17 01:20:04.054 [INFO][5167] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:04.069108 containerd[1503]: 2026-01-17 01:20:04.054 [INFO][5167] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:04.069108 containerd[1503]: 2026-01-17 01:20:04.062 [WARNING][5167] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" HandleID="k8s-pod-network.447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Workload="srv--3374x.gb1.brightbox.com-k8s-whisker--5665f4c7d8--97hgh-eth0" Jan 17 01:20:04.069108 containerd[1503]: 2026-01-17 01:20:04.063 [INFO][5167] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" HandleID="k8s-pod-network.447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Workload="srv--3374x.gb1.brightbox.com-k8s-whisker--5665f4c7d8--97hgh-eth0" Jan 17 01:20:04.069108 containerd[1503]: 2026-01-17 01:20:04.064 [INFO][5167] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:04.069108 containerd[1503]: 2026-01-17 01:20:04.066 [INFO][5160] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9" Jan 17 01:20:04.070802 containerd[1503]: time="2026-01-17T01:20:04.069991420Z" level=info msg="TearDown network for sandbox \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\" successfully" Jan 17 01:20:04.075392 containerd[1503]: time="2026-01-17T01:20:04.075175964Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:20:04.075392 containerd[1503]: time="2026-01-17T01:20:04.075324585Z" level=info msg="RemovePodSandbox \"447da7cb479bdbb0791c7227bdeee84f15f2ab0d6f5237af0387f86af59655d9\" returns successfully" Jan 17 01:20:04.076458 containerd[1503]: time="2026-01-17T01:20:04.076310637Z" level=info msg="StopPodSandbox for \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\"" Jan 17 01:20:04.149813 containerd[1503]: time="2026-01-17T01:20:04.149581413Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:04.150908 containerd[1503]: time="2026-01-17T01:20:04.150743275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 01:20:04.150908 containerd[1503]: time="2026-01-17T01:20:04.150840932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 01:20:04.151773 kubelet[2689]: E0117 01:20:04.151070 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 01:20:04.151773 kubelet[2689]: E0117 01:20:04.151139 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 01:20:04.151773 kubelet[2689]: E0117 01:20:04.151297 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zj4mv_calico-system(569a36a0-46e6-4752-8b8f-005d85b2712f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:04.152413 kubelet[2689]: E0117 01:20:04.151384 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:20:04.188110 containerd[1503]: 2026-01-17 01:20:04.128 [WARNING][5181] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0", GenerateName:"calico-apiserver-6b97d9fcf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b97d9fcf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb", Pod:"calico-apiserver-6b97d9fcf7-cbr65", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ba5337a59a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:04.188110 containerd[1503]: 2026-01-17 01:20:04.129 [INFO][5181] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Jan 17 01:20:04.188110 containerd[1503]: 2026-01-17 01:20:04.129 [INFO][5181] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" iface="eth0" netns="" Jan 17 01:20:04.188110 containerd[1503]: 2026-01-17 01:20:04.129 [INFO][5181] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Jan 17 01:20:04.188110 containerd[1503]: 2026-01-17 01:20:04.129 [INFO][5181] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Jan 17 01:20:04.188110 containerd[1503]: 2026-01-17 01:20:04.169 [INFO][5189] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" HandleID="k8s-pod-network.37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:04.188110 containerd[1503]: 2026-01-17 01:20:04.169 [INFO][5189] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:04.188110 containerd[1503]: 2026-01-17 01:20:04.169 [INFO][5189] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:04.188110 containerd[1503]: 2026-01-17 01:20:04.180 [WARNING][5189] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" HandleID="k8s-pod-network.37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:04.188110 containerd[1503]: 2026-01-17 01:20:04.180 [INFO][5189] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" HandleID="k8s-pod-network.37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:04.188110 containerd[1503]: 2026-01-17 01:20:04.182 [INFO][5189] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:04.188110 containerd[1503]: 2026-01-17 01:20:04.184 [INFO][5181] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Jan 17 01:20:04.188110 containerd[1503]: time="2026-01-17T01:20:04.187728842Z" level=info msg="TearDown network for sandbox \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\" successfully" Jan 17 01:20:04.188110 containerd[1503]: time="2026-01-17T01:20:04.187766302Z" level=info msg="StopPodSandbox for \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\" returns successfully" Jan 17 01:20:04.191324 containerd[1503]: time="2026-01-17T01:20:04.189587239Z" level=info msg="RemovePodSandbox for \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\"" Jan 17 01:20:04.191324 containerd[1503]: time="2026-01-17T01:20:04.189625180Z" level=info msg="Forcibly stopping sandbox \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\"" Jan 17 01:20:04.313594 containerd[1503]: 2026-01-17 01:20:04.260 [WARNING][5204] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0", GenerateName:"calico-apiserver-6b97d9fcf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b97d9fcf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"04aa51502bc16b0927ff4a809114285da16d79e73107f5d45ea328e0d2829fbb", Pod:"calico-apiserver-6b97d9fcf7-cbr65", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ba5337a59a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:04.313594 containerd[1503]: 2026-01-17 01:20:04.261 [INFO][5204] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Jan 17 01:20:04.313594 containerd[1503]: 2026-01-17 01:20:04.261 [INFO][5204] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" iface="eth0" netns="" Jan 17 01:20:04.313594 containerd[1503]: 2026-01-17 01:20:04.261 [INFO][5204] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Jan 17 01:20:04.313594 containerd[1503]: 2026-01-17 01:20:04.261 [INFO][5204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Jan 17 01:20:04.313594 containerd[1503]: 2026-01-17 01:20:04.296 [INFO][5211] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" HandleID="k8s-pod-network.37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:04.313594 containerd[1503]: 2026-01-17 01:20:04.297 [INFO][5211] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:04.313594 containerd[1503]: 2026-01-17 01:20:04.297 [INFO][5211] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:04.313594 containerd[1503]: 2026-01-17 01:20:04.306 [WARNING][5211] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" HandleID="k8s-pod-network.37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:04.313594 containerd[1503]: 2026-01-17 01:20:04.306 [INFO][5211] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" HandleID="k8s-pod-network.37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--cbr65-eth0" Jan 17 01:20:04.313594 containerd[1503]: 2026-01-17 01:20:04.308 [INFO][5211] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:04.313594 containerd[1503]: 2026-01-17 01:20:04.311 [INFO][5204] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef" Jan 17 01:20:04.316104 containerd[1503]: time="2026-01-17T01:20:04.314235744Z" level=info msg="TearDown network for sandbox \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\" successfully" Jan 17 01:20:04.322214 containerd[1503]: time="2026-01-17T01:20:04.321975946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:20:04.322214 containerd[1503]: time="2026-01-17T01:20:04.322077979Z" level=info msg="RemovePodSandbox \"37274a8f7716269d62d891a9cd79125e8d0589f1b235a0d39c1625b666c24fef\" returns successfully" Jan 17 01:20:04.324769 containerd[1503]: time="2026-01-17T01:20:04.324734646Z" level=info msg="StopPodSandbox for \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\"" Jan 17 01:20:04.427222 containerd[1503]: 2026-01-17 01:20:04.376 [WARNING][5225] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0", GenerateName:"calico-kube-controllers-796c9cbbf8-", Namespace:"calico-system", SelfLink:"", UID:"722a4a78-bbcc-4e35-a380-cd81c1aedcd6", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796c9cbbf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16", Pod:"calico-kube-controllers-796c9cbbf8-tx6d5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c5137f3eba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:04.427222 containerd[1503]: 2026-01-17 01:20:04.376 [INFO][5225] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Jan 17 01:20:04.427222 containerd[1503]: 2026-01-17 01:20:04.376 [INFO][5225] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" iface="eth0" netns="" Jan 17 01:20:04.427222 containerd[1503]: 2026-01-17 01:20:04.376 [INFO][5225] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Jan 17 01:20:04.427222 containerd[1503]: 2026-01-17 01:20:04.376 [INFO][5225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Jan 17 01:20:04.427222 containerd[1503]: 2026-01-17 01:20:04.410 [INFO][5232] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" HandleID="k8s-pod-network.4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:20:04.427222 containerd[1503]: 2026-01-17 01:20:04.411 [INFO][5232] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:04.427222 containerd[1503]: 2026-01-17 01:20:04.411 [INFO][5232] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:04.427222 containerd[1503]: 2026-01-17 01:20:04.420 [WARNING][5232] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" HandleID="k8s-pod-network.4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:20:04.427222 containerd[1503]: 2026-01-17 01:20:04.420 [INFO][5232] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" HandleID="k8s-pod-network.4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:20:04.427222 containerd[1503]: 2026-01-17 01:20:04.422 [INFO][5232] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:04.427222 containerd[1503]: 2026-01-17 01:20:04.424 [INFO][5225] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Jan 17 01:20:04.427222 containerd[1503]: time="2026-01-17T01:20:04.427038422Z" level=info msg="TearDown network for sandbox \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\" successfully" Jan 17 01:20:04.427222 containerd[1503]: time="2026-01-17T01:20:04.427076378Z" level=info msg="StopPodSandbox for \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\" returns successfully" Jan 17 01:20:04.428669 containerd[1503]: time="2026-01-17T01:20:04.427834447Z" level=info msg="RemovePodSandbox for \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\"" Jan 17 01:20:04.428669 containerd[1503]: time="2026-01-17T01:20:04.427875111Z" level=info msg="Forcibly stopping sandbox \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\"" Jan 17 01:20:04.431654 systemd-networkd[1426]: cali3d959892152: Gained IPv6LL Jan 17 01:20:04.533438 containerd[1503]: 2026-01-17 01:20:04.477 [WARNING][5246] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0", GenerateName:"calico-kube-controllers-796c9cbbf8-", Namespace:"calico-system", SelfLink:"", UID:"722a4a78-bbcc-4e35-a380-cd81c1aedcd6", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796c9cbbf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"2baa48efecedc885e64e8ac13ba097e80c30daef1849cb8ad5b510d3b94a6b16", Pod:"calico-kube-controllers-796c9cbbf8-tx6d5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0c5137f3eba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:04.533438 containerd[1503]: 2026-01-17 01:20:04.478 [INFO][5246] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Jan 17 01:20:04.533438 containerd[1503]: 2026-01-17 01:20:04.478 [INFO][5246] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" iface="eth0" netns="" Jan 17 01:20:04.533438 containerd[1503]: 2026-01-17 01:20:04.478 [INFO][5246] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Jan 17 01:20:04.533438 containerd[1503]: 2026-01-17 01:20:04.478 [INFO][5246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Jan 17 01:20:04.533438 containerd[1503]: 2026-01-17 01:20:04.517 [INFO][5253] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" HandleID="k8s-pod-network.4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:20:04.533438 containerd[1503]: 2026-01-17 01:20:04.518 [INFO][5253] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:04.533438 containerd[1503]: 2026-01-17 01:20:04.518 [INFO][5253] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:04.533438 containerd[1503]: 2026-01-17 01:20:04.526 [WARNING][5253] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" HandleID="k8s-pod-network.4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:20:04.533438 containerd[1503]: 2026-01-17 01:20:04.527 [INFO][5253] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" HandleID="k8s-pod-network.4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--kube--controllers--796c9cbbf8--tx6d5-eth0" Jan 17 01:20:04.533438 containerd[1503]: 2026-01-17 01:20:04.528 [INFO][5253] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:04.533438 containerd[1503]: 2026-01-17 01:20:04.531 [INFO][5246] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49" Jan 17 01:20:04.534122 containerd[1503]: time="2026-01-17T01:20:04.533521182Z" level=info msg="TearDown network for sandbox \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\" successfully" Jan 17 01:20:04.537680 containerd[1503]: time="2026-01-17T01:20:04.537600276Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:20:04.537778 containerd[1503]: time="2026-01-17T01:20:04.537731994Z" level=info msg="RemovePodSandbox \"4d65ae76a793f865d0a7cec131eb4ba1770303876b6f30e6b2d2ae8779132d49\" returns successfully" Jan 17 01:20:04.538973 containerd[1503]: time="2026-01-17T01:20:04.538476368Z" level=info msg="StopPodSandbox for \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\"" Jan 17 01:20:04.643202 containerd[1503]: 2026-01-17 01:20:04.588 [WARNING][5267] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0", GenerateName:"calico-apiserver-8559fb66ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"3996ebc6-9eb5-4686-84b4-4c62f64f0ca5", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8559fb66ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e", Pod:"calico-apiserver-8559fb66ff-gwd8z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a0a6f3c10f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:04.643202 containerd[1503]: 2026-01-17 01:20:04.588 [INFO][5267] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Jan 17 01:20:04.643202 containerd[1503]: 2026-01-17 01:20:04.588 [INFO][5267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" iface="eth0" netns="" Jan 17 01:20:04.643202 containerd[1503]: 2026-01-17 01:20:04.588 [INFO][5267] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Jan 17 01:20:04.643202 containerd[1503]: 2026-01-17 01:20:04.589 [INFO][5267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Jan 17 01:20:04.643202 containerd[1503]: 2026-01-17 01:20:04.623 [INFO][5274] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" HandleID="k8s-pod-network.51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:20:04.643202 containerd[1503]: 2026-01-17 01:20:04.623 [INFO][5274] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:04.643202 containerd[1503]: 2026-01-17 01:20:04.623 [INFO][5274] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:04.643202 containerd[1503]: 2026-01-17 01:20:04.635 [WARNING][5274] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" HandleID="k8s-pod-network.51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:20:04.643202 containerd[1503]: 2026-01-17 01:20:04.635 [INFO][5274] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" HandleID="k8s-pod-network.51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:20:04.643202 containerd[1503]: 2026-01-17 01:20:04.638 [INFO][5274] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:04.643202 containerd[1503]: 2026-01-17 01:20:04.640 [INFO][5267] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Jan 17 01:20:04.646296 containerd[1503]: time="2026-01-17T01:20:04.643565627Z" level=info msg="TearDown network for sandbox \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\" successfully" Jan 17 01:20:04.646296 containerd[1503]: time="2026-01-17T01:20:04.643609150Z" level=info msg="StopPodSandbox for \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\" returns successfully" Jan 17 01:20:04.647154 containerd[1503]: time="2026-01-17T01:20:04.646756687Z" level=info msg="RemovePodSandbox for \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\"" Jan 17 01:20:04.647154 containerd[1503]: time="2026-01-17T01:20:04.646879953Z" level=info msg="Forcibly stopping sandbox \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\"" Jan 17 01:20:04.767768 kubelet[2689]: E0117 01:20:04.767564 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" podUID="e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0" Jan 17 01:20:04.779381 kubelet[2689]: E0117 01:20:04.778493 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-trqpb" podUID="03cd3dbd-5e1b-4532-9c89-eb080f7c53df" Jan 17 01:20:04.784650 kubelet[2689]: E0117 01:20:04.783615 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:20:04.794569 containerd[1503]: 2026-01-17 01:20:04.699 [WARNING][5289] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0", GenerateName:"calico-apiserver-8559fb66ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"3996ebc6-9eb5-4686-84b4-4c62f64f0ca5", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8559fb66ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"5c058482a5f4d457c0a7b08c43160cbd62452241d9f5eba2a1ad497ebf5f266e", Pod:"calico-apiserver-8559fb66ff-gwd8z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a0a6f3c10f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:04.794569 containerd[1503]: 2026-01-17 01:20:04.700 [INFO][5289] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Jan 17 01:20:04.794569 containerd[1503]: 2026-01-17 01:20:04.700 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" iface="eth0" netns="" Jan 17 01:20:04.794569 containerd[1503]: 2026-01-17 01:20:04.700 [INFO][5289] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Jan 17 01:20:04.794569 containerd[1503]: 2026-01-17 01:20:04.700 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Jan 17 01:20:04.794569 containerd[1503]: 2026-01-17 01:20:04.754 [INFO][5297] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" HandleID="k8s-pod-network.51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:20:04.794569 containerd[1503]: 2026-01-17 01:20:04.754 [INFO][5297] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:04.794569 containerd[1503]: 2026-01-17 01:20:04.754 [INFO][5297] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:04.794569 containerd[1503]: 2026-01-17 01:20:04.781 [WARNING][5297] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" HandleID="k8s-pod-network.51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:20:04.794569 containerd[1503]: 2026-01-17 01:20:04.781 [INFO][5297] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" HandleID="k8s-pod-network.51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--8559fb66ff--gwd8z-eth0" Jan 17 01:20:04.794569 containerd[1503]: 2026-01-17 01:20:04.786 [INFO][5297] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:04.794569 containerd[1503]: 2026-01-17 01:20:04.790 [INFO][5289] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c" Jan 17 01:20:04.795245 containerd[1503]: time="2026-01-17T01:20:04.794845809Z" level=info msg="TearDown network for sandbox \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\" successfully" Jan 17 01:20:04.802369 containerd[1503]: time="2026-01-17T01:20:04.801512511Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:20:04.802369 containerd[1503]: time="2026-01-17T01:20:04.801596010Z" level=info msg="RemovePodSandbox \"51bff72ceb2abbc5a8b9bc312c7d613d1d8107333693d1a19ccf4915fc20629c\" returns successfully" Jan 17 01:20:04.803532 containerd[1503]: time="2026-01-17T01:20:04.803277075Z" level=info msg="StopPodSandbox for \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\"" Jan 17 01:20:04.932978 containerd[1503]: 2026-01-17 01:20:04.882 [WARNING][5311] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"612dd22b-6369-4c99-85b1-59da9f6da310", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4", Pod:"coredns-66bc5c9577-fj97p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie06a89415a8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:04.932978 containerd[1503]: 2026-01-17 01:20:04.883 [INFO][5311] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Jan 17 01:20:04.932978 containerd[1503]: 2026-01-17 01:20:04.883 [INFO][5311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" iface="eth0" netns="" Jan 17 01:20:04.932978 containerd[1503]: 2026-01-17 01:20:04.883 [INFO][5311] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Jan 17 01:20:04.932978 containerd[1503]: 2026-01-17 01:20:04.883 [INFO][5311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Jan 17 01:20:04.932978 containerd[1503]: 2026-01-17 01:20:04.916 [INFO][5318] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" HandleID="k8s-pod-network.230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:04.932978 containerd[1503]: 2026-01-17 01:20:04.917 [INFO][5318] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:04.932978 containerd[1503]: 2026-01-17 01:20:04.917 [INFO][5318] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:04.932978 containerd[1503]: 2026-01-17 01:20:04.926 [WARNING][5318] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" HandleID="k8s-pod-network.230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:04.932978 containerd[1503]: 2026-01-17 01:20:04.926 [INFO][5318] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" HandleID="k8s-pod-network.230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:04.932978 containerd[1503]: 2026-01-17 01:20:04.928 [INFO][5318] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:04.932978 containerd[1503]: 2026-01-17 01:20:04.930 [INFO][5311] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Jan 17 01:20:04.934015 containerd[1503]: time="2026-01-17T01:20:04.933814452Z" level=info msg="TearDown network for sandbox \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\" successfully" Jan 17 01:20:04.934015 containerd[1503]: time="2026-01-17T01:20:04.933856440Z" level=info msg="StopPodSandbox for \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\" returns successfully" Jan 17 01:20:04.936395 containerd[1503]: time="2026-01-17T01:20:04.936237842Z" level=info msg="RemovePodSandbox for \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\"" Jan 17 01:20:04.936725 containerd[1503]: time="2026-01-17T01:20:04.936569002Z" level=info msg="Forcibly stopping sandbox \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\"" Jan 17 01:20:05.046966 containerd[1503]: 2026-01-17 01:20:04.993 [WARNING][5332] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"612dd22b-6369-4c99-85b1-59da9f6da310", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"d0d5ec427b6a594aa04812fe861fab0155a7b689572dbe428883be0cd5bd12c4", Pod:"coredns-66bc5c9577-fj97p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie06a89415a8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:05.046966 containerd[1503]: 2026-01-17 01:20:04.993 [INFO][5332] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Jan 17 01:20:05.046966 containerd[1503]: 2026-01-17 01:20:04.993 [INFO][5332] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" iface="eth0" netns="" Jan 17 01:20:05.046966 containerd[1503]: 2026-01-17 01:20:04.993 [INFO][5332] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Jan 17 01:20:05.046966 containerd[1503]: 2026-01-17 01:20:04.993 [INFO][5332] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Jan 17 01:20:05.046966 containerd[1503]: 2026-01-17 01:20:05.028 [INFO][5341] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" HandleID="k8s-pod-network.230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:05.046966 containerd[1503]: 2026-01-17 01:20:05.028 [INFO][5341] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:05.046966 containerd[1503]: 2026-01-17 01:20:05.029 [INFO][5341] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:05.046966 containerd[1503]: 2026-01-17 01:20:05.039 [WARNING][5341] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" HandleID="k8s-pod-network.230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:05.046966 containerd[1503]: 2026-01-17 01:20:05.039 [INFO][5341] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" HandleID="k8s-pod-network.230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--fj97p-eth0" Jan 17 01:20:05.046966 containerd[1503]: 2026-01-17 01:20:05.042 [INFO][5341] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:05.046966 containerd[1503]: 2026-01-17 01:20:05.044 [INFO][5332] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742" Jan 17 01:20:05.048196 containerd[1503]: time="2026-01-17T01:20:05.047315035Z" level=info msg="TearDown network for sandbox \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\" successfully" Jan 17 01:20:05.052708 containerd[1503]: time="2026-01-17T01:20:05.052301671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:20:05.052708 containerd[1503]: time="2026-01-17T01:20:05.052417918Z" level=info msg="RemovePodSandbox \"230edaf3c08faa0227889dad72f03f1e885a60dd60336795f44bfd4dfa447742\" returns successfully" Jan 17 01:20:05.053595 containerd[1503]: time="2026-01-17T01:20:05.053521924Z" level=info msg="StopPodSandbox for \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\"" Jan 17 01:20:05.156349 containerd[1503]: 2026-01-17 01:20:05.106 [WARNING][5356] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0", GenerateName:"calico-apiserver-6b97d9fcf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"94302249-2c45-4fa9-a8ed-6356d7141062", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b97d9fcf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b", Pod:"calico-apiserver-6b97d9fcf7-9dv76", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7df1a56cbe6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:05.156349 containerd[1503]: 2026-01-17 01:20:05.106 [INFO][5356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Jan 17 01:20:05.156349 containerd[1503]: 2026-01-17 01:20:05.106 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" iface="eth0" netns="" Jan 17 01:20:05.156349 containerd[1503]: 2026-01-17 01:20:05.106 [INFO][5356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Jan 17 01:20:05.156349 containerd[1503]: 2026-01-17 01:20:05.106 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Jan 17 01:20:05.156349 containerd[1503]: 2026-01-17 01:20:05.139 [INFO][5363] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" HandleID="k8s-pod-network.2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:20:05.156349 containerd[1503]: 2026-01-17 01:20:05.140 [INFO][5363] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:05.156349 containerd[1503]: 2026-01-17 01:20:05.140 [INFO][5363] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:05.156349 containerd[1503]: 2026-01-17 01:20:05.149 [WARNING][5363] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" HandleID="k8s-pod-network.2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:20:05.156349 containerd[1503]: 2026-01-17 01:20:05.149 [INFO][5363] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" HandleID="k8s-pod-network.2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:20:05.156349 containerd[1503]: 2026-01-17 01:20:05.151 [INFO][5363] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:05.156349 containerd[1503]: 2026-01-17 01:20:05.154 [INFO][5356] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Jan 17 01:20:05.157999 containerd[1503]: time="2026-01-17T01:20:05.157486363Z" level=info msg="TearDown network for sandbox \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\" successfully" Jan 17 01:20:05.157999 containerd[1503]: time="2026-01-17T01:20:05.157534592Z" level=info msg="StopPodSandbox for \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\" returns successfully" Jan 17 01:20:05.158299 containerd[1503]: time="2026-01-17T01:20:05.158239857Z" level=info msg="RemovePodSandbox for \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\"" Jan 17 01:20:05.158592 containerd[1503]: time="2026-01-17T01:20:05.158392433Z" level=info msg="Forcibly stopping sandbox \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\"" Jan 17 01:20:05.275234 containerd[1503]: 2026-01-17 01:20:05.220 [WARNING][5377] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0", GenerateName:"calico-apiserver-6b97d9fcf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"94302249-2c45-4fa9-a8ed-6356d7141062", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b97d9fcf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"c8e83d89c74767051d6a9a6634b04409f76ec49dda4bc468b37be83bddb9a12b", Pod:"calico-apiserver-6b97d9fcf7-9dv76", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7df1a56cbe6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:05.275234 containerd[1503]: 2026-01-17 01:20:05.220 [INFO][5377] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Jan 17 01:20:05.275234 containerd[1503]: 2026-01-17 01:20:05.220 [INFO][5377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" iface="eth0" netns="" Jan 17 01:20:05.275234 containerd[1503]: 2026-01-17 01:20:05.220 [INFO][5377] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Jan 17 01:20:05.275234 containerd[1503]: 2026-01-17 01:20:05.220 [INFO][5377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Jan 17 01:20:05.275234 containerd[1503]: 2026-01-17 01:20:05.255 [INFO][5384] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" HandleID="k8s-pod-network.2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:20:05.275234 containerd[1503]: 2026-01-17 01:20:05.255 [INFO][5384] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:05.275234 containerd[1503]: 2026-01-17 01:20:05.255 [INFO][5384] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:05.275234 containerd[1503]: 2026-01-17 01:20:05.266 [WARNING][5384] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" HandleID="k8s-pod-network.2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:20:05.275234 containerd[1503]: 2026-01-17 01:20:05.266 [INFO][5384] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" HandleID="k8s-pod-network.2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Workload="srv--3374x.gb1.brightbox.com-k8s-calico--apiserver--6b97d9fcf7--9dv76-eth0" Jan 17 01:20:05.275234 containerd[1503]: 2026-01-17 01:20:05.268 [INFO][5384] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:05.275234 containerd[1503]: 2026-01-17 01:20:05.270 [INFO][5377] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134" Jan 17 01:20:05.275234 containerd[1503]: time="2026-01-17T01:20:05.273420787Z" level=info msg="TearDown network for sandbox \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\" successfully" Jan 17 01:20:05.280894 containerd[1503]: time="2026-01-17T01:20:05.280853192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:20:05.281077 containerd[1503]: time="2026-01-17T01:20:05.281047612Z" level=info msg="RemovePodSandbox \"2afa1a15a6d1ea6693a4654fc440a7348475981f8c6bc400bc47594c596ee134\" returns successfully" Jan 17 01:20:06.129984 containerd[1503]: time="2026-01-17T01:20:06.129496907Z" level=info msg="StopPodSandbox for \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\"" Jan 17 01:20:06.282654 containerd[1503]: 2026-01-17 01:20:06.216 [INFO][5399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:20:06.282654 containerd[1503]: 2026-01-17 01:20:06.216 [INFO][5399] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" iface="eth0" netns="/var/run/netns/cni-d530a57c-dc6e-7f4d-bba8-37fbfc2b86f4" Jan 17 01:20:06.282654 containerd[1503]: 2026-01-17 01:20:06.216 [INFO][5399] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" iface="eth0" netns="/var/run/netns/cni-d530a57c-dc6e-7f4d-bba8-37fbfc2b86f4" Jan 17 01:20:06.282654 containerd[1503]: 2026-01-17 01:20:06.217 [INFO][5399] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" iface="eth0" netns="/var/run/netns/cni-d530a57c-dc6e-7f4d-bba8-37fbfc2b86f4" Jan 17 01:20:06.282654 containerd[1503]: 2026-01-17 01:20:06.217 [INFO][5399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:20:06.282654 containerd[1503]: 2026-01-17 01:20:06.217 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:20:06.282654 containerd[1503]: 2026-01-17 01:20:06.255 [INFO][5406] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" HandleID="k8s-pod-network.4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:20:06.282654 containerd[1503]: 2026-01-17 01:20:06.256 [INFO][5406] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:06.282654 containerd[1503]: 2026-01-17 01:20:06.256 [INFO][5406] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:20:06.282654 containerd[1503]: 2026-01-17 01:20:06.272 [WARNING][5406] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" HandleID="k8s-pod-network.4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:20:06.282654 containerd[1503]: 2026-01-17 01:20:06.272 [INFO][5406] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" HandleID="k8s-pod-network.4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:20:06.282654 containerd[1503]: 2026-01-17 01:20:06.275 [INFO][5406] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:20:06.282654 containerd[1503]: 2026-01-17 01:20:06.279 [INFO][5399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:20:06.282654 containerd[1503]: time="2026-01-17T01:20:06.282243665Z" level=info msg="TearDown network for sandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\" successfully" Jan 17 01:20:06.282654 containerd[1503]: time="2026-01-17T01:20:06.282315582Z" level=info msg="StopPodSandbox for \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\" returns successfully" Jan 17 01:20:06.288943 systemd[1]: run-netns-cni\x2dd530a57c\x2ddc6e\x2d7f4d\x2dbba8\x2d37fbfc2b86f4.mount: Deactivated successfully. 
Jan 17 01:20:06.304895 containerd[1503]: time="2026-01-17T01:20:06.304818218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x4jzn,Uid:26acde11-c245-44e9-abdd-2ebc74cfcad2,Namespace:kube-system,Attempt:1,}" Jan 17 01:20:06.512100 systemd-networkd[1426]: calif5be1c68bc0: Link UP Jan 17 01:20:06.515122 systemd-networkd[1426]: calif5be1c68bc0: Gained carrier Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.399 [INFO][5412] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0 coredns-66bc5c9577- kube-system 26acde11-c245-44e9-abdd-2ebc74cfcad2 1131 0 2026-01-17 01:19:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-3374x.gb1.brightbox.com coredns-66bc5c9577-x4jzn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif5be1c68bc0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" Namespace="kube-system" Pod="coredns-66bc5c9577-x4jzn" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-" Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.399 [INFO][5412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" Namespace="kube-system" Pod="coredns-66bc5c9577-x4jzn" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.438 [INFO][5424] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" HandleID="k8s-pod-network.5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.438 [INFO][5424] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" HandleID="k8s-pod-network.5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d3a0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-3374x.gb1.brightbox.com", "pod":"coredns-66bc5c9577-x4jzn", "timestamp":"2026-01-17 01:20:06.438503396 +0000 UTC"}, Hostname:"srv-3374x.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.438 [INFO][5424] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.438 [INFO][5424] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
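[Editor's note] The RunPodSandbox record above enters the CNI ADD path: CmdAddK8s derives a handle ("k8s-pod-network." plus the sandbox ID) and asks IPAM for exactly one IPv4 address, tagging the request with namespace, node, and pod attributes. The sketch below rebuilds that request from the AutoAssignArgs dump printed in the log; the struct is a local mirror of the fields visible there, not an import of the real libcalico-go ipam package.

    package main

    import "fmt"

    // autoAssignArgs mirrors the fields shown in the AutoAssignArgs dump above.
    type autoAssignArgs struct {
        Num4, Num6  int               // one IPv4, no IPv6, as in the log
        HandleID    *string           // "k8s-pod-network." + sandbox ID
        Attrs       map[string]string // namespace/node/pod attributes
        Hostname    string
        IntendedUse string
    }

    func main() {
        handle := "k8s-pod-network.5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6"
        args := autoAssignArgs{
            Num4:     1,
            Num6:     0,
            HandleID: &handle,
            Attrs: map[string]string{
                "namespace": "kube-system",
                "node":      "srv-3374x.gb1.brightbox.com",
                "pod":       "coredns-66bc5c9577-x4jzn",
            },
            Hostname:    "srv-3374x.gb1.brightbox.com",
            IntendedUse: "Workload",
        }
        fmt.Printf("requesting %d IPv4 / %d IPv6 for handle %s\n",
            args.Num4, args.Num6, *args.HandleID)
    }

Keying the handle on the sandbox ID is what ties this assignment to the release-by-handleID step seen in the DEL records earlier.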
Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.438 [INFO][5424] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3374x.gb1.brightbox.com' Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.450 [INFO][5424] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.463 [INFO][5424] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.473 [INFO][5424] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.475 [INFO][5424] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.479 [INFO][5424] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.480 [INFO][5424] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.482 [INFO][5424] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6 Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.487 [INFO][5424] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.498 [INFO][5424] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.2.137/26] block=192.168.2.128/26 handle="k8s-pod-network.5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.498 [INFO][5424] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.137/26] handle="k8s-pod-network.5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" host="srv-3374x.gb1.brightbox.com" Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.498 [INFO][5424] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
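[Editor's note] The block-affinity walk above is the core of the assignment: the node holds an affinity for 192.168.2.128/26, the allocator confirms it, loads the block, and claims the first free address, landing on 192.168.2.137. A toy illustration of that scan follows; Calico actually tracks allocations in a per-block bitmap in its datastore, and the in-use set below is illustrative, chosen so the walk lands on .137 as the log does (the log itself only names .130, .131, .132, .134, and .136 for this block).

    package main

    import (
        "fmt"
        "net"
    )

    // inc returns ip + 1 with per-byte carry (works for IPv4 and IPv6).
    func inc(ip net.IP) net.IP {
        out := make(net.IP, len(ip))
        copy(out, ip)
        for i := len(out) - 1; i >= 0; i-- {
            out[i]++
            if out[i] != 0 {
                break
            }
        }
        return out
    }

    func main() {
        _, block, _ := net.ParseCIDR("192.168.2.128/26")
        // Illustrative in-use set for this block (see note above).
        used := map[string]bool{}
        for _, ip := range []string{
            "192.168.2.128", "192.168.2.129", "192.168.2.130",
            "192.168.2.131", "192.168.2.132", "192.168.2.133",
            "192.168.2.134", "192.168.2.135", "192.168.2.136",
        } {
            used[ip] = true
        }
        // Scan the block for the first address with no live allocation.
        for ip := block.IP.Mask(block.Mask); block.Contains(ip); ip = inc(ip) {
            if !used[ip.String()] {
                fmt.Println("claimed", ip) // prints 192.168.2.137, as in the log
                return
            }
        }
    }

The "Writing block in order to claim IPs" record corresponds to persisting the updated block back to the datastore, which is what makes the claim visible to other allocators before the lock is released.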
Jan 17 01:20:06.548383 containerd[1503]: 2026-01-17 01:20:06.498 [INFO][5424] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.2.137/26] IPv6=[] ContainerID="5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" HandleID="k8s-pod-network.5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:20:06.551615 containerd[1503]: 2026-01-17 01:20:06.501 [INFO][5412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" Namespace="kube-system" Pod="coredns-66bc5c9577-x4jzn" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"26acde11-c245-44e9-abdd-2ebc74cfcad2", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-x4jzn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5be1c68bc0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:06.551615 containerd[1503]: 2026-01-17 01:20:06.502 [INFO][5412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.137/32] ContainerID="5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" Namespace="kube-system" Pod="coredns-66bc5c9577-x4jzn" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:20:06.551615 containerd[1503]: 2026-01-17 01:20:06.502 [INFO][5412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5be1c68bc0 ContainerID="5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" Namespace="kube-system" Pod="coredns-66bc5c9577-x4jzn" 
WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:20:06.551615 containerd[1503]: 2026-01-17 01:20:06.514 [INFO][5412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" Namespace="kube-system" Pod="coredns-66bc5c9577-x4jzn" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:20:06.551615 containerd[1503]: 2026-01-17 01:20:06.516 [INFO][5412] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" Namespace="kube-system" Pod="coredns-66bc5c9577-x4jzn" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"26acde11-c245-44e9-abdd-2ebc74cfcad2", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6", Pod:"coredns-66bc5c9577-x4jzn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5be1c68bc0", MAC:"92:df:d9:34:65:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:20:06.551959 containerd[1503]: 2026-01-17 01:20:06.543 [INFO][5412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6" Namespace="kube-system" Pod="coredns-66bc5c9577-x4jzn" WorkloadEndpoint="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:20:06.601571 containerd[1503]: time="2026-01-17T01:20:06.601040220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:20:06.601571 containerd[1503]: time="2026-01-17T01:20:06.601145693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:20:06.601571 containerd[1503]: time="2026-01-17T01:20:06.601170072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:20:06.601571 containerd[1503]: time="2026-01-17T01:20:06.601313025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:20:06.651639 systemd[1]: Started cri-containerd-5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6.scope - libcontainer container 5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6. Jan 17 01:20:06.726175 containerd[1503]: time="2026-01-17T01:20:06.726121630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x4jzn,Uid:26acde11-c245-44e9-abdd-2ebc74cfcad2,Namespace:kube-system,Attempt:1,} returns sandbox id \"5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6\"" Jan 17 01:20:06.735857 containerd[1503]: time="2026-01-17T01:20:06.735796873Z" level=info msg="CreateContainer within sandbox \"5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 01:20:06.748740 containerd[1503]: time="2026-01-17T01:20:06.748585969Z" level=info msg="CreateContainer within sandbox \"5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5fd652f945681a02da1f741dff035f42ce25cd144dd11f83f4cf5daba905bbbc\"" Jan 17 01:20:06.750720 containerd[1503]: time="2026-01-17T01:20:06.750658236Z" level=info msg="StartContainer for \"5fd652f945681a02da1f741dff035f42ce25cd144dd11f83f4cf5daba905bbbc\"" Jan 17 01:20:06.792686 systemd[1]: Started cri-containerd-5fd652f945681a02da1f741dff035f42ce25cd144dd11f83f4cf5daba905bbbc.scope - libcontainer container 5fd652f945681a02da1f741dff035f42ce25cd144dd11f83f4cf5daba905bbbc. 
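The shim plugin-loading lines and the records around them show the CRI sequence for this pod: RunPodSandbox returns the sandbox ID 5adddedbaa43…, CreateContainer places the coredns container inside that sandbox, and StartContainer launches it (its successful return is logged just below), with systemd tracking the task under a cri-containerd-<id>.scope unit. The same three calls can be issued directly against containerd's CRI socket; this is a reduced sketch assuming the k8s.io/cri-api package and a reachable /run/containerd/containerd.sock, with the image reference a placeholder since the log does not show it.

```go
// Sketch of the RunPodSandbox -> CreateContainer -> StartContainer sequence
// logged above, issued directly over containerd's CRI socket. Illustrative
// only: configs are reduced to the minimum and the image is a placeholder.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtime.PodSandboxConfig{
		Metadata: &runtime.PodSandboxMetadata{
			Name: "coredns-66bc5c9577-x4jzn", Namespace: "kube-system",
			Uid: "26acde11-c245-44e9-abdd-2ebc74cfcad2", Attempt: 1,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	c, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtime.ContainerConfig{
			Metadata: &runtime.ContainerMetadata{Name: "coredns"},
			// Placeholder: the actual coredns image reference is not in this log.
			Image: &runtime.ImageSpec{Image: "example.registry/coredns:tag"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```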
Jan 17 01:20:06.846091 containerd[1503]: time="2026-01-17T01:20:06.846018485Z" level=info msg="StartContainer for \"5fd652f945681a02da1f741dff035f42ce25cd144dd11f83f4cf5daba905bbbc\" returns successfully" Jan 17 01:20:07.823710 kubelet[2689]: I0117 01:20:07.822940 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x4jzn" podStartSLOduration=59.822910619 podStartE2EDuration="59.822910619s" podCreationTimestamp="2026-01-17 01:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 01:20:07.821915358 +0000 UTC m=+65.032415257" watchObservedRunningTime="2026-01-17 01:20:07.822910619 +0000 UTC m=+65.033410532" Jan 17 01:20:07.951548 systemd-networkd[1426]: calif5be1c68bc0: Gained IPv6LL Jan 17 01:20:12.129290 containerd[1503]: time="2026-01-17T01:20:12.129199513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 01:20:12.445454 containerd[1503]: time="2026-01-17T01:20:12.445029527Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:12.447345 containerd[1503]: time="2026-01-17T01:20:12.447167380Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 01:20:12.447345 containerd[1503]: time="2026-01-17T01:20:12.447250413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 01:20:12.447576 kubelet[2689]: E0117 01:20:12.447516 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 01:20:12.448001 kubelet[2689]: E0117 01:20:12.447601 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 01:20:12.448001 kubelet[2689]: E0117 01:20:12.447711 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-56bdf7c7c8-4pst2_calico-system(87f78f88-02a9-4300-bfd6-d78b47321ed8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:12.450337 containerd[1503]: time="2026-01-17T01:20:12.450276862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 01:20:12.768962 containerd[1503]: time="2026-01-17T01:20:12.768740317Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:12.769895 containerd[1503]: time="2026-01-17T01:20:12.769842761Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 01:20:12.770205 containerd[1503]: time="2026-01-17T01:20:12.769956012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 01:20:12.770350 kubelet[2689]: E0117 01:20:12.770256 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 01:20:12.770437 kubelet[2689]: E0117 01:20:12.770368 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 01:20:12.770512 kubelet[2689]: E0117 01:20:12.770484 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-56bdf7c7c8-4pst2_calico-system(87f78f88-02a9-4300-bfd6-d78b47321ed8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:12.770581 kubelet[2689]: E0117 01:20:12.770547 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56bdf7c7c8-4pst2" podUID="87f78f88-02a9-4300-bfd6-d78b47321ed8" Jan 17 01:20:14.130404 containerd[1503]: time="2026-01-17T01:20:14.130185604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 01:20:14.440088 containerd[1503]: time="2026-01-17T01:20:14.439696085Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:14.441931 containerd[1503]: time="2026-01-17T01:20:14.441865058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 01:20:14.442098 containerd[1503]: time="2026-01-17T01:20:14.441980813Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 01:20:14.442650 kubelet[2689]: E0117 01:20:14.442362 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 01:20:14.442650 kubelet[2689]: E0117 01:20:14.442434 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 01:20:14.443832 kubelet[2689]: E0117 01:20:14.442653 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-796c9cbbf8-tx6d5_calico-system(722a4a78-bbcc-4e35-a380-cd81c1aedcd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:14.443832 kubelet[2689]: E0117 01:20:14.442710 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-796c9cbbf8-tx6d5" podUID="722a4a78-bbcc-4e35-a380-cd81c1aedcd6" Jan 17 01:20:14.444230 containerd[1503]: time="2026-01-17T01:20:14.443419496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:20:14.756364 containerd[1503]: time="2026-01-17T01:20:14.755820511Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:14.757549 containerd[1503]: time="2026-01-17T01:20:14.757388003Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:20:14.757549 containerd[1503]: time="2026-01-17T01:20:14.757456420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:20:14.757886 kubelet[2689]: E0117 01:20:14.757800 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:14.759019 kubelet[2689]: E0117 01:20:14.757896 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:14.759019 kubelet[2689]: E0117 01:20:14.758042 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b97d9fcf7-9dv76_calico-apiserver(94302249-2c45-4fa9-a8ed-6356d7141062): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:14.759019 kubelet[2689]: E0117 01:20:14.758104 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-9dv76" podUID="94302249-2c45-4fa9-a8ed-6356d7141062" Jan 17 01:20:16.129349 containerd[1503]: time="2026-01-17T01:20:16.128994435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 01:20:16.442091 containerd[1503]: time="2026-01-17T01:20:16.441875804Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:16.443912 containerd[1503]: time="2026-01-17T01:20:16.443834929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 01:20:16.444084 containerd[1503]: time="2026-01-17T01:20:16.444007811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 01:20:16.444761 kubelet[2689]: E0117 01:20:16.444346 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 01:20:16.444761 kubelet[2689]: E0117 01:20:16.444429 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 01:20:16.444761 kubelet[2689]: E0117 01:20:16.444550 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zj4mv_calico-system(569a36a0-46e6-4752-8b8f-005d85b2712f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:16.446104 containerd[1503]: 
time="2026-01-17T01:20:16.446061825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 01:20:16.756467 containerd[1503]: time="2026-01-17T01:20:16.756188349Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:16.757627 containerd[1503]: time="2026-01-17T01:20:16.757572212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 01:20:16.757748 containerd[1503]: time="2026-01-17T01:20:16.757688149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 01:20:16.757973 kubelet[2689]: E0117 01:20:16.757916 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 01:20:16.758101 kubelet[2689]: E0117 01:20:16.758006 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 01:20:16.758188 kubelet[2689]: E0117 01:20:16.758116 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zj4mv_calico-system(569a36a0-46e6-4752-8b8f-005d85b2712f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:16.758397 kubelet[2689]: E0117 01:20:16.758180 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:20:17.131928 containerd[1503]: time="2026-01-17T01:20:17.131631598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:20:17.443601 containerd[1503]: time="2026-01-17T01:20:17.442994651Z" level=info msg="trying next host - 
response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:17.444734 containerd[1503]: time="2026-01-17T01:20:17.444585903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:20:17.444734 containerd[1503]: time="2026-01-17T01:20:17.444631007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:20:17.445029 kubelet[2689]: E0117 01:20:17.444959 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:17.445974 kubelet[2689]: E0117 01:20:17.445063 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:17.447089 kubelet[2689]: E0117 01:20:17.446636 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8559fb66ff-gwd8z_calico-apiserver(3996ebc6-9eb5-4686-84b4-4c62f64f0ca5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:17.447089 kubelet[2689]: E0117 01:20:17.446719 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559fb66ff-gwd8z" podUID="3996ebc6-9eb5-4686-84b4-4c62f64f0ca5" Jan 17 01:20:17.447942 containerd[1503]: time="2026-01-17T01:20:17.446768132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 01:20:17.764020 containerd[1503]: time="2026-01-17T01:20:17.763529194Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:17.765429 containerd[1503]: time="2026-01-17T01:20:17.765383116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 01:20:17.765730 containerd[1503]: time="2026-01-17T01:20:17.765469141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 01:20:17.766175 kubelet[2689]: E0117 01:20:17.766046 2689 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 01:20:17.766175 kubelet[2689]: E0117 01:20:17.766150 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 01:20:17.766481 kubelet[2689]: E0117 01:20:17.766318 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-trqpb_calico-system(03cd3dbd-5e1b-4532-9c89-eb080f7c53df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:17.766481 kubelet[2689]: E0117 01:20:17.766388 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-trqpb" podUID="03cd3dbd-5e1b-4532-9c89-eb080f7c53df" Jan 17 01:20:20.129074 containerd[1503]: time="2026-01-17T01:20:20.129000407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:20:20.450310 containerd[1503]: time="2026-01-17T01:20:20.449823250Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:20.451489 containerd[1503]: time="2026-01-17T01:20:20.451347676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:20:20.451489 containerd[1503]: time="2026-01-17T01:20:20.451436812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:20:20.451754 kubelet[2689]: E0117 01:20:20.451666 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:20.452329 kubelet[2689]: E0117 01:20:20.451756 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:20.452329 kubelet[2689]: E0117 01:20:20.451866 2689 kuberuntime_manager.go:1449] "Unhandled 
Error" err="container calico-apiserver start failed in pod calico-apiserver-6b97d9fcf7-cbr65_calico-apiserver(e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:20.452329 kubelet[2689]: E0117 01:20:20.451938 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" podUID="e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0" Jan 17 01:20:23.142450 systemd[1]: Started sshd@9-10.244.8.82:22-20.161.92.111:44462.service - OpenSSH per-connection server daemon (20.161.92.111:44462). Jan 17 01:20:23.766941 sshd[5555]: Accepted publickey for core from 20.161.92.111 port 44462 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:20:23.773137 sshd[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:20:23.785927 systemd-logind[1484]: New session 12 of user core. Jan 17 01:20:23.793513 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 01:20:24.130785 kubelet[2689]: E0117 01:20:24.130603 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56bdf7c7c8-4pst2" podUID="87f78f88-02a9-4300-bfd6-d78b47321ed8" Jan 17 01:20:24.901988 sshd[5555]: pam_unix(sshd:session): session closed for user core Jan 17 01:20:24.911803 systemd-logind[1484]: Session 12 logged out. Waiting for processes to exit. Jan 17 01:20:24.913687 systemd[1]: sshd@9-10.244.8.82:22-20.161.92.111:44462.service: Deactivated successfully. Jan 17 01:20:24.918931 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 01:20:24.921558 systemd-logind[1484]: Removed session 12. 
Jan 17 01:20:26.130099 kubelet[2689]: E0117 01:20:26.129550 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-796c9cbbf8-tx6d5" podUID="722a4a78-bbcc-4e35-a380-cd81c1aedcd6" Jan 17 01:20:29.130183 kubelet[2689]: E0117 01:20:29.129957 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-9dv76" podUID="94302249-2c45-4fa9-a8ed-6356d7141062" Jan 17 01:20:29.130183 kubelet[2689]: E0117 01:20:29.130105 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:20:30.010968 systemd[1]: Started sshd@10-10.244.8.82:22-20.161.92.111:44466.service - OpenSSH per-connection server daemon (20.161.92.111:44466). Jan 17 01:20:30.670436 sshd[5595]: Accepted publickey for core from 20.161.92.111 port 44466 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:20:30.675479 sshd[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:20:30.687723 systemd-logind[1484]: New session 13 of user core. Jan 17 01:20:30.696593 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 17 01:20:31.133138 kubelet[2689]: E0117 01:20:31.133075 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559fb66ff-gwd8z" podUID="3996ebc6-9eb5-4686-84b4-4c62f64f0ca5" Jan 17 01:20:31.376402 sshd[5595]: pam_unix(sshd:session): session closed for user core Jan 17 01:20:31.380335 systemd-logind[1484]: Session 13 logged out. Waiting for processes to exit. Jan 17 01:20:31.380820 systemd[1]: sshd@10-10.244.8.82:22-20.161.92.111:44466.service: Deactivated successfully. Jan 17 01:20:31.383565 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 01:20:31.386773 systemd-logind[1484]: Removed session 13. Jan 17 01:20:32.128997 kubelet[2689]: E0117 01:20:32.128899 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" podUID="e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0" Jan 17 01:20:32.130481 kubelet[2689]: E0117 01:20:32.130431 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-trqpb" podUID="03cd3dbd-5e1b-4532-9c89-eb080f7c53df" Jan 17 01:20:36.483716 systemd[1]: Started sshd@11-10.244.8.82:22-20.161.92.111:37352.service - OpenSSH per-connection server daemon (20.161.92.111:37352). Jan 17 01:20:37.058026 sshd[5614]: Accepted publickey for core from 20.161.92.111 port 37352 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:20:37.060781 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:20:37.068541 systemd-logind[1484]: New session 14 of user core. Jan 17 01:20:37.072522 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 17 01:20:37.135045 containerd[1503]: time="2026-01-17T01:20:37.134727638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 01:20:37.448300 containerd[1503]: time="2026-01-17T01:20:37.448137278Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:37.451899 containerd[1503]: time="2026-01-17T01:20:37.450496241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 01:20:37.451899 containerd[1503]: time="2026-01-17T01:20:37.450661139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 01:20:37.452119 kubelet[2689]: E0117 01:20:37.451179 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 01:20:37.452119 kubelet[2689]: E0117 01:20:37.451340 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 01:20:37.460097 kubelet[2689]: E0117 01:20:37.459537 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-56bdf7c7c8-4pst2_calico-system(87f78f88-02a9-4300-bfd6-d78b47321ed8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:37.462422 containerd[1503]: time="2026-01-17T01:20:37.462375375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 01:20:37.621032 sshd[5614]: pam_unix(sshd:session): session closed for user core Jan 17 01:20:37.628470 systemd[1]: sshd@11-10.244.8.82:22-20.161.92.111:37352.service: Deactivated successfully. Jan 17 01:20:37.631969 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 01:20:37.633385 systemd-logind[1484]: Session 14 logged out. Waiting for processes to exit. Jan 17 01:20:37.635318 systemd-logind[1484]: Removed session 14. Jan 17 01:20:37.728078 systemd[1]: Started sshd@12-10.244.8.82:22-20.161.92.111:37362.service - OpenSSH per-connection server daemon (20.161.92.111:37362). 
Jan 17 01:20:37.775418 containerd[1503]: time="2026-01-17T01:20:37.773634830Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:37.799177 containerd[1503]: time="2026-01-17T01:20:37.798670436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 01:20:37.799177 containerd[1503]: time="2026-01-17T01:20:37.798703242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 01:20:37.799495 kubelet[2689]: E0117 01:20:37.799238 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 01:20:37.799495 kubelet[2689]: E0117 01:20:37.799347 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 01:20:37.801613 kubelet[2689]: E0117 01:20:37.801464 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-56bdf7c7c8-4pst2_calico-system(87f78f88-02a9-4300-bfd6-d78b47321ed8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:37.801948 kubelet[2689]: E0117 01:20:37.801852 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56bdf7c7c8-4pst2" podUID="87f78f88-02a9-4300-bfd6-d78b47321ed8" Jan 17 01:20:38.311319 sshd[5628]: Accepted publickey for core from 20.161.92.111 port 37362 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:20:38.314015 sshd[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:20:38.322973 systemd-logind[1484]: New session 15 of user core. Jan 17 01:20:38.333952 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 17 01:20:38.958592 sshd[5628]: pam_unix(sshd:session): session closed for user core Jan 17 01:20:38.966084 systemd[1]: sshd@12-10.244.8.82:22-20.161.92.111:37362.service: Deactivated successfully. Jan 17 01:20:38.969962 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 01:20:38.972157 systemd-logind[1484]: Session 15 logged out. Waiting for processes to exit. Jan 17 01:20:38.974653 systemd-logind[1484]: Removed session 15. Jan 17 01:20:39.071748 systemd[1]: Started sshd@13-10.244.8.82:22-20.161.92.111:37364.service - OpenSSH per-connection server daemon (20.161.92.111:37364). Jan 17 01:20:39.670502 sshd[5639]: Accepted publickey for core from 20.161.92.111 port 37364 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:20:39.672665 sshd[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:20:39.680610 systemd-logind[1484]: New session 16 of user core. Jan 17 01:20:39.689565 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 01:20:40.185642 sshd[5639]: pam_unix(sshd:session): session closed for user core Jan 17 01:20:40.195496 systemd-logind[1484]: Session 16 logged out. Waiting for processes to exit. Jan 17 01:20:40.195877 systemd[1]: sshd@13-10.244.8.82:22-20.161.92.111:37364.service: Deactivated successfully. Jan 17 01:20:40.198921 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 01:20:40.200371 systemd-logind[1484]: Removed session 16. Jan 17 01:20:41.130334 containerd[1503]: time="2026-01-17T01:20:41.129919303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 01:20:41.476863 containerd[1503]: time="2026-01-17T01:20:41.476459698Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:41.478240 containerd[1503]: time="2026-01-17T01:20:41.478078278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 01:20:41.478398 containerd[1503]: time="2026-01-17T01:20:41.478218039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 01:20:41.478596 kubelet[2689]: E0117 01:20:41.478526 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 01:20:41.479334 kubelet[2689]: E0117 01:20:41.478606 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 01:20:41.479334 kubelet[2689]: E0117 01:20:41.478868 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-796c9cbbf8-tx6d5_calico-system(722a4a78-bbcc-4e35-a380-cd81c1aedcd6): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:41.479334 kubelet[2689]: E0117 01:20:41.478921 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-796c9cbbf8-tx6d5" podUID="722a4a78-bbcc-4e35-a380-cd81c1aedcd6" Jan 17 01:20:41.480668 containerd[1503]: time="2026-01-17T01:20:41.479815667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 01:20:41.790832 containerd[1503]: time="2026-01-17T01:20:41.790598791Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:41.791963 containerd[1503]: time="2026-01-17T01:20:41.791918306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 01:20:41.792752 containerd[1503]: time="2026-01-17T01:20:41.792031355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 01:20:41.792844 kubelet[2689]: E0117 01:20:41.792330 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 01:20:41.792844 kubelet[2689]: E0117 01:20:41.792396 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 01:20:41.792844 kubelet[2689]: E0117 01:20:41.792515 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zj4mv_calico-system(569a36a0-46e6-4752-8b8f-005d85b2712f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:41.793983 containerd[1503]: time="2026-01-17T01:20:41.793951288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 01:20:42.102529 containerd[1503]: time="2026-01-17T01:20:42.102432752Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:42.104074 containerd[1503]: time="2026-01-17T01:20:42.103949983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 01:20:42.104074 containerd[1503]: time="2026-01-17T01:20:42.103993451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 01:20:42.104465 kubelet[2689]: E0117 01:20:42.104387 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 01:20:42.104580 kubelet[2689]: E0117 01:20:42.104490 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 01:20:42.104660 kubelet[2689]: E0117 01:20:42.104627 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zj4mv_calico-system(569a36a0-46e6-4752-8b8f-005d85b2712f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:42.104805 kubelet[2689]: E0117 01:20:42.104717 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:20:43.132589 containerd[1503]: time="2026-01-17T01:20:43.132534847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:20:43.445401 containerd[1503]: time="2026-01-17T01:20:43.445207866Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:43.447471 containerd[1503]: time="2026-01-17T01:20:43.447397727Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:20:43.447564 containerd[1503]: 
time="2026-01-17T01:20:43.447514789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:20:43.447844 kubelet[2689]: E0117 01:20:43.447796 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:43.449096 kubelet[2689]: E0117 01:20:43.447861 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:43.449096 kubelet[2689]: E0117 01:20:43.447975 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b97d9fcf7-9dv76_calico-apiserver(94302249-2c45-4fa9-a8ed-6356d7141062): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:43.449096 kubelet[2689]: E0117 01:20:43.448040 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-9dv76" podUID="94302249-2c45-4fa9-a8ed-6356d7141062" Jan 17 01:20:44.133227 containerd[1503]: time="2026-01-17T01:20:44.132235524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 01:20:44.456598 containerd[1503]: time="2026-01-17T01:20:44.456427287Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:44.457883 containerd[1503]: time="2026-01-17T01:20:44.457834425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 01:20:44.458058 containerd[1503]: time="2026-01-17T01:20:44.457845881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 01:20:44.458209 kubelet[2689]: E0117 01:20:44.458143 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 01:20:44.458686 kubelet[2689]: E0117 01:20:44.458223 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 01:20:44.458686 kubelet[2689]: E0117 01:20:44.458368 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-trqpb_calico-system(03cd3dbd-5e1b-4532-9c89-eb080f7c53df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:44.458686 kubelet[2689]: E0117 01:20:44.458417 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-trqpb" podUID="03cd3dbd-5e1b-4532-9c89-eb080f7c53df" Jan 17 01:20:45.294692 systemd[1]: Started sshd@14-10.244.8.82:22-20.161.92.111:60166.service - OpenSSH per-connection server daemon (20.161.92.111:60166). Jan 17 01:20:45.859847 sshd[5660]: Accepted publickey for core from 20.161.92.111 port 60166 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:20:45.860818 sshd[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:20:45.868075 systemd-logind[1484]: New session 17 of user core. Jan 17 01:20:45.876489 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 01:20:46.130678 containerd[1503]: time="2026-01-17T01:20:46.130514532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:20:46.366380 sshd[5660]: pam_unix(sshd:session): session closed for user core Jan 17 01:20:46.370755 systemd[1]: sshd@14-10.244.8.82:22-20.161.92.111:60166.service: Deactivated successfully. Jan 17 01:20:46.374084 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 01:20:46.376992 systemd-logind[1484]: Session 17 logged out. Waiting for processes to exit. Jan 17 01:20:46.378679 systemd-logind[1484]: Removed session 17. 
Jan 17 01:20:46.444394 containerd[1503]: time="2026-01-17T01:20:46.444190105Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:46.445769 containerd[1503]: time="2026-01-17T01:20:46.445706341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:20:46.445882 containerd[1503]: time="2026-01-17T01:20:46.445834735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 01:20:46.446285 kubelet[2689]: E0117 01:20:46.446195 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:46.446761 kubelet[2689]: E0117 01:20:46.446304 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:46.446761 kubelet[2689]: E0117 01:20:46.446428 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8559fb66ff-gwd8z_calico-apiserver(3996ebc6-9eb5-4686-84b4-4c62f64f0ca5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:46.446761 kubelet[2689]: E0117 01:20:46.446482 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559fb66ff-gwd8z" podUID="3996ebc6-9eb5-4686-84b4-4c62f64f0ca5" Jan 17 01:20:47.130883 containerd[1503]: time="2026-01-17T01:20:47.130560803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 01:20:47.463658 containerd[1503]: time="2026-01-17T01:20:47.463411138Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:20:47.465046 containerd[1503]: time="2026-01-17T01:20:47.464992298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 01:20:47.465184 containerd[1503]: time="2026-01-17T01:20:47.465135286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" 
Jan 17 01:20:47.465420 kubelet[2689]: E0117 01:20:47.465358 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:47.465942 kubelet[2689]: E0117 01:20:47.465437 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 01:20:47.465942 kubelet[2689]: E0117 01:20:47.465556 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b97d9fcf7-cbr65_calico-apiserver(e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 01:20:47.465942 kubelet[2689]: E0117 01:20:47.465613 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" podUID="e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0" Jan 17 01:20:51.470654 systemd[1]: Started sshd@15-10.244.8.82:22-20.161.92.111:60180.service - OpenSSH per-connection server daemon (20.161.92.111:60180). Jan 17 01:20:52.036308 sshd[5680]: Accepted publickey for core from 20.161.92.111 port 60180 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:20:52.040826 sshd[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:20:52.049463 systemd-logind[1484]: New session 18 of user core. Jan 17 01:20:52.060781 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 17 01:20:52.131831 kubelet[2689]: E0117 01:20:52.131615 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56bdf7c7c8-4pst2" podUID="87f78f88-02a9-4300-bfd6-d78b47321ed8" Jan 17 01:20:52.546500 sshd[5680]: pam_unix(sshd:session): session closed for user core Jan 17 01:20:52.552699 systemd[1]: sshd@15-10.244.8.82:22-20.161.92.111:60180.service: Deactivated successfully. Jan 17 01:20:52.555814 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 01:20:52.557679 systemd-logind[1484]: Session 18 logged out. Waiting for processes to exit. Jan 17 01:20:52.559103 systemd-logind[1484]: Removed session 18. Jan 17 01:20:55.130780 kubelet[2689]: E0117 01:20:55.130073 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-796c9cbbf8-tx6d5" podUID="722a4a78-bbcc-4e35-a380-cd81c1aedcd6" Jan 17 01:20:56.129301 kubelet[2689]: E0117 01:20:56.128652 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-9dv76" podUID="94302249-2c45-4fa9-a8ed-6356d7141062" Jan 17 01:20:57.130052 kubelet[2689]: E0117 01:20:57.129928 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:20:57.650627 systemd[1]: Started sshd@16-10.244.8.82:22-20.161.92.111:33716.service - OpenSSH per-connection server daemon (20.161.92.111:33716). Jan 17 01:20:58.129553 kubelet[2689]: E0117 01:20:58.129048 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" podUID="e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0" Jan 17 01:20:58.129553 kubelet[2689]: E0117 01:20:58.129250 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-trqpb" podUID="03cd3dbd-5e1b-4532-9c89-eb080f7c53df" Jan 17 01:20:58.216810 sshd[5692]: Accepted publickey for core from 20.161.92.111 port 33716 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:20:58.219173 sshd[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:20:58.227598 systemd-logind[1484]: New session 19 of user core. Jan 17 01:20:58.232462 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 01:20:58.718709 sshd[5692]: pam_unix(sshd:session): session closed for user core Jan 17 01:20:58.725727 systemd[1]: sshd@16-10.244.8.82:22-20.161.92.111:33716.service: Deactivated successfully. Jan 17 01:20:58.729695 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 01:20:58.734182 systemd-logind[1484]: Session 19 logged out. Waiting for processes to exit. Jan 17 01:20:58.735902 systemd-logind[1484]: Removed session 19. Jan 17 01:20:58.825687 systemd[1]: Started sshd@17-10.244.8.82:22-20.161.92.111:33728.service - OpenSSH per-connection server daemon (20.161.92.111:33728). Jan 17 01:20:59.355391 systemd[1]: run-containerd-runc-k8s.io-e0138372493699c832102689e70bd12d2d10486c18b5f4a4cf531a01ec144c76-runc.6q1u1Q.mount: Deactivated successfully. Jan 17 01:20:59.390886 sshd[5705]: Accepted publickey for core from 20.161.92.111 port 33728 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:20:59.395755 sshd[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:20:59.405657 systemd-logind[1484]: New session 20 of user core. Jan 17 01:20:59.413515 systemd[1]: Started session-20.scope - Session 20 of User core. 
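
From here on the pods mostly log ImagePullBackOff rather than fresh ErrImagePull: after a failed pull, kubelet does not retry immediately but waits on a doubling back-off (documented in Kubernetes as starting at 10s and capping at 5m), which is why the same pods reappear in the journal every few minutes. A minimal sketch of that schedule, using our own names rather than kubelet's internals:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Illustrative image-pull back-off: double from 10s, cap at 5m,
    	// per the behaviour Kubernetes documents for ImagePullBackOff.
    	delay := 10 * time.Second
    	const maxDelay = 5 * time.Minute
    	for attempt := 1; attempt <= 7; attempt++ {
    		fmt.Printf("attempt %d: wait %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }
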
Jan 17 01:21:00.130585 kubelet[2689]: E0117 01:21:00.129217 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559fb66ff-gwd8z" podUID="3996ebc6-9eb5-4686-84b4-4c62f64f0ca5" Jan 17 01:21:00.185221 sshd[5705]: pam_unix(sshd:session): session closed for user core Jan 17 01:21:00.198577 systemd[1]: sshd@17-10.244.8.82:22-20.161.92.111:33728.service: Deactivated successfully. Jan 17 01:21:00.202629 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 01:21:00.206156 systemd-logind[1484]: Session 20 logged out. Waiting for processes to exit. Jan 17 01:21:00.210538 systemd-logind[1484]: Removed session 20. Jan 17 01:21:00.293859 systemd[1]: Started sshd@18-10.244.8.82:22-20.161.92.111:33736.service - OpenSSH per-connection server daemon (20.161.92.111:33736). Jan 17 01:21:00.888050 sshd[5737]: Accepted publickey for core from 20.161.92.111 port 33736 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:21:00.890832 sshd[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:21:00.900790 systemd-logind[1484]: New session 21 of user core. Jan 17 01:21:00.910193 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 01:21:02.254216 sshd[5737]: pam_unix(sshd:session): session closed for user core Jan 17 01:21:02.261314 systemd[1]: sshd@18-10.244.8.82:22-20.161.92.111:33736.service: Deactivated successfully. Jan 17 01:21:02.264643 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 01:21:02.267815 systemd-logind[1484]: Session 21 logged out. Waiting for processes to exit. Jan 17 01:21:02.269690 systemd-logind[1484]: Removed session 21. Jan 17 01:21:02.357658 systemd[1]: Started sshd@19-10.244.8.82:22-20.161.92.111:33738.service - OpenSSH per-connection server daemon (20.161.92.111:33738). Jan 17 01:21:03.527912 sshd[5754]: Accepted publickey for core from 20.161.92.111 port 33738 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:21:03.530625 sshd[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:21:03.537923 systemd-logind[1484]: New session 22 of user core. Jan 17 01:21:03.543492 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 01:21:04.421827 sshd[5754]: pam_unix(sshd:session): session closed for user core Jan 17 01:21:04.426822 systemd-logind[1484]: Session 22 logged out. Waiting for processes to exit. Jan 17 01:21:04.427501 systemd[1]: sshd@19-10.244.8.82:22-20.161.92.111:33738.service: Deactivated successfully. Jan 17 01:21:04.430064 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 01:21:04.431928 systemd-logind[1484]: Removed session 22. Jan 17 01:21:04.528665 systemd[1]: Started sshd@20-10.244.8.82:22-20.161.92.111:51314.service - OpenSSH per-connection server daemon (20.161.92.111:51314). 
Jan 17 01:21:05.291220 containerd[1503]: time="2026-01-17T01:21:05.291168886Z" level=info msg="StopPodSandbox for \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\"" Jan 17 01:21:05.410910 sshd[5770]: Accepted publickey for core from 20.161.92.111 port 51314 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:21:05.414325 sshd[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:21:05.432641 systemd-logind[1484]: New session 23 of user core. Jan 17 01:21:05.439276 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 01:21:05.472585 containerd[1503]: 2026-01-17 01:21:05.392 [WARNING][5780] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"26acde11-c245-44e9-abdd-2ebc74cfcad2", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6", Pod:"coredns-66bc5c9577-x4jzn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5be1c68bc0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:21:05.472585 containerd[1503]: 2026-01-17 01:21:05.393 [INFO][5780] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:21:05.472585 containerd[1503]: 2026-01-17 01:21:05.393 [INFO][5780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" iface="eth0" netns="" Jan 17 01:21:05.472585 containerd[1503]: 2026-01-17 01:21:05.393 [INFO][5780] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:21:05.472585 containerd[1503]: 2026-01-17 01:21:05.393 [INFO][5780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:21:05.472585 containerd[1503]: 2026-01-17 01:21:05.452 [INFO][5787] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" HandleID="k8s-pod-network.4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:21:05.472585 containerd[1503]: 2026-01-17 01:21:05.452 [INFO][5787] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:21:05.472585 containerd[1503]: 2026-01-17 01:21:05.452 [INFO][5787] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:21:05.472585 containerd[1503]: 2026-01-17 01:21:05.464 [WARNING][5787] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" HandleID="k8s-pod-network.4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:21:05.472585 containerd[1503]: 2026-01-17 01:21:05.464 [INFO][5787] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" HandleID="k8s-pod-network.4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:21:05.472585 containerd[1503]: 2026-01-17 01:21:05.466 [INFO][5787] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:21:05.472585 containerd[1503]: 2026-01-17 01:21:05.469 [INFO][5780] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:21:05.473558 containerd[1503]: time="2026-01-17T01:21:05.472676370Z" level=info msg="TearDown network for sandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\" successfully" Jan 17 01:21:05.473558 containerd[1503]: time="2026-01-17T01:21:05.472743390Z" level=info msg="StopPodSandbox for \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\" returns successfully" Jan 17 01:21:05.474372 containerd[1503]: time="2026-01-17T01:21:05.473889431Z" level=info msg="RemovePodSandbox for \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\"" Jan 17 01:21:05.474372 containerd[1503]: time="2026-01-17T01:21:05.473932313Z" level=info msg="Forcibly stopping sandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\"" Jan 17 01:21:05.574636 containerd[1503]: 2026-01-17 01:21:05.525 [WARNING][5802] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"26acde11-c245-44e9-abdd-2ebc74cfcad2", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 1, 19, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3374x.gb1.brightbox.com", ContainerID:"5adddedbaa438503a7b43b213e682c5c7fbe7aee56e359aacae93bb2391248c6", Pod:"coredns-66bc5c9577-x4jzn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5be1c68bc0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 01:21:05.574636 containerd[1503]: 2026-01-17 01:21:05.526 [INFO][5802] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:21:05.574636 containerd[1503]: 2026-01-17 01:21:05.526 [INFO][5802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" iface="eth0" netns="" Jan 17 01:21:05.574636 containerd[1503]: 2026-01-17 01:21:05.526 [INFO][5802] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:21:05.574636 containerd[1503]: 2026-01-17 01:21:05.526 [INFO][5802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:21:05.574636 containerd[1503]: 2026-01-17 01:21:05.557 [INFO][5809] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" HandleID="k8s-pod-network.4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:21:05.574636 containerd[1503]: 2026-01-17 01:21:05.557 [INFO][5809] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 01:21:05.574636 containerd[1503]: 2026-01-17 01:21:05.557 [INFO][5809] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 01:21:05.574636 containerd[1503]: 2026-01-17 01:21:05.567 [WARNING][5809] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" HandleID="k8s-pod-network.4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:21:05.574636 containerd[1503]: 2026-01-17 01:21:05.567 [INFO][5809] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" HandleID="k8s-pod-network.4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Workload="srv--3374x.gb1.brightbox.com-k8s-coredns--66bc5c9577--x4jzn-eth0" Jan 17 01:21:05.574636 containerd[1503]: 2026-01-17 01:21:05.569 [INFO][5809] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 01:21:05.574636 containerd[1503]: 2026-01-17 01:21:05.572 [INFO][5802] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561" Jan 17 01:21:05.574636 containerd[1503]: time="2026-01-17T01:21:05.574477548Z" level=info msg="TearDown network for sandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\" successfully" Jan 17 01:21:05.585163 containerd[1503]: time="2026-01-17T01:21:05.585110829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 01:21:05.585281 containerd[1503]: time="2026-01-17T01:21:05.585184458Z" level=info msg="RemovePodSandbox \"4996435bfc97134e1d95ec14a147731697ded6f81de6e660097e1e76814c0561\" returns successfully" Jan 17 01:21:05.955207 sshd[5770]: pam_unix(sshd:session): session closed for user core Jan 17 01:21:05.962589 systemd[1]: sshd@20-10.244.8.82:22-20.161.92.111:51314.service: Deactivated successfully. Jan 17 01:21:05.965569 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 01:21:05.966905 systemd-logind[1484]: Session 23 logged out. Waiting for processes to exit. Jan 17 01:21:05.968666 systemd-logind[1484]: Removed session 23. 
Jan 17 01:21:07.137149 kubelet[2689]: E0117 01:21:07.136474 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-796c9cbbf8-tx6d5" podUID="722a4a78-bbcc-4e35-a380-cd81c1aedcd6" Jan 17 01:21:07.154784 kubelet[2689]: E0117 01:21:07.154664 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56bdf7c7c8-4pst2" podUID="87f78f88-02a9-4300-bfd6-d78b47321ed8" Jan 17 01:21:09.131721 kubelet[2689]: E0117 01:21:09.131475 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:21:10.128254 kubelet[2689]: E0117 01:21:10.127937 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-9dv76" podUID="94302249-2c45-4fa9-a8ed-6356d7141062" Jan 17 01:21:10.128254 kubelet[2689]: E0117 01:21:10.127957 2689 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" podUID="e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0" Jan 17 01:21:11.058626 systemd[1]: Started sshd@21-10.244.8.82:22-20.161.92.111:51320.service - OpenSSH per-connection server daemon (20.161.92.111:51320). Jan 17 01:21:11.638476 sshd[5826]: Accepted publickey for core from 20.161.92.111 port 51320 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:21:11.641199 sshd[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:21:11.648578 systemd-logind[1484]: New session 24 of user core. Jan 17 01:21:11.655483 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 01:21:12.128918 kubelet[2689]: E0117 01:21:12.128824 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559fb66ff-gwd8z" podUID="3996ebc6-9eb5-4686-84b4-4c62f64f0ca5" Jan 17 01:21:12.130745 sshd[5826]: pam_unix(sshd:session): session closed for user core Jan 17 01:21:12.136975 systemd-logind[1484]: Session 24 logged out. Waiting for processes to exit. Jan 17 01:21:12.137861 systemd[1]: sshd@21-10.244.8.82:22-20.161.92.111:51320.service: Deactivated successfully. Jan 17 01:21:12.142572 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 01:21:12.148415 systemd-logind[1484]: Removed session 24. Jan 17 01:21:13.129543 kubelet[2689]: E0117 01:21:13.129234 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-trqpb" podUID="03cd3dbd-5e1b-4532-9c89-eb080f7c53df" Jan 17 01:21:17.235641 systemd[1]: Started sshd@22-10.244.8.82:22-20.161.92.111:44068.service - OpenSSH per-connection server daemon (20.161.92.111:44068). Jan 17 01:21:17.817154 sshd[5841]: Accepted publickey for core from 20.161.92.111 port 44068 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:21:17.819320 sshd[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:21:17.827212 systemd-logind[1484]: New session 25 of user core. Jan 17 01:21:17.833773 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 17 01:21:18.359069 sshd[5841]: pam_unix(sshd:session): session closed for user core Jan 17 01:21:18.365290 systemd[1]: sshd@22-10.244.8.82:22-20.161.92.111:44068.service: Deactivated successfully. Jan 17 01:21:18.370214 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 01:21:18.371956 systemd-logind[1484]: Session 25 logged out. Waiting for processes to exit. Jan 17 01:21:18.373862 systemd-logind[1484]: Removed session 25. Jan 17 01:21:19.128891 containerd[1503]: time="2026-01-17T01:21:19.128481994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 01:21:19.452728 containerd[1503]: time="2026-01-17T01:21:19.452488538Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:21:19.455083 containerd[1503]: time="2026-01-17T01:21:19.454918285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 01:21:19.455083 containerd[1503]: time="2026-01-17T01:21:19.454980570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 01:21:19.455398 kubelet[2689]: E0117 01:21:19.455311 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 01:21:19.455947 kubelet[2689]: E0117 01:21:19.455405 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 01:21:19.455947 kubelet[2689]: E0117 01:21:19.455614 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-56bdf7c7c8-4pst2_calico-system(87f78f88-02a9-4300-bfd6-d78b47321ed8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 01:21:19.458846 containerd[1503]: time="2026-01-17T01:21:19.458537010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 01:21:19.767474 containerd[1503]: time="2026-01-17T01:21:19.767160660Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:21:19.768613 containerd[1503]: time="2026-01-17T01:21:19.768559653Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 01:21:19.768752 containerd[1503]: time="2026-01-17T01:21:19.768677619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
active requests=0, bytes read=85" Jan 17 01:21:19.769095 kubelet[2689]: E0117 01:21:19.768941 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 01:21:19.769095 kubelet[2689]: E0117 01:21:19.769009 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 01:21:19.769360 kubelet[2689]: E0117 01:21:19.769135 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-56bdf7c7c8-4pst2_calico-system(87f78f88-02a9-4300-bfd6-d78b47321ed8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 01:21:19.769360 kubelet[2689]: E0117 01:21:19.769200 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56bdf7c7c8-4pst2" podUID="87f78f88-02a9-4300-bfd6-d78b47321ed8" Jan 17 01:21:20.130089 kubelet[2689]: E0117 01:21:20.129920 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zj4mv" podUID="569a36a0-46e6-4752-8b8f-005d85b2712f" Jan 17 01:21:21.340903 update_engine[1485]: I20260117 01:21:21.340592 1485 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 01:21:21.340903 
update_engine[1485]: I20260117 01:21:21.340765 1485 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 01:21:21.345566 update_engine[1485]: I20260117 01:21:21.345336 1485 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 01:21:21.347757 update_engine[1485]: I20260117 01:21:21.347575 1485 omaha_request_params.cc:62] Current group set to lts Jan 17 01:21:21.348510 update_engine[1485]: I20260117 01:21:21.348460 1485 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 01:21:21.350157 update_engine[1485]: I20260117 01:21:21.348614 1485 update_attempter.cc:643] Scheduling an action processor start. Jan 17 01:21:21.350157 update_engine[1485]: I20260117 01:21:21.348668 1485 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 01:21:21.350157 update_engine[1485]: I20260117 01:21:21.348755 1485 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 01:21:21.350157 update_engine[1485]: I20260117 01:21:21.348870 1485 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 01:21:21.350157 update_engine[1485]: I20260117 01:21:21.348937 1485 omaha_request_action.cc:272] Request: Jan 17 01:21:21.350157 update_engine[1485]: [Omaha request XML body not captured in this log] Jan 17 01:21:21.350157 update_engine[1485]: I20260117 01:21:21.348955 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 01:21:21.364390 update_engine[1485]: I20260117 01:21:21.362360 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 01:21:21.364390 update_engine[1485]: I20260117 01:21:21.362833 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 17 01:21:21.369457 update_engine[1485]: E20260117 01:21:21.369239 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 01:21:21.369457 update_engine[1485]: I20260117 01:21:21.369371 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 01:21:21.382656 locksmithd[1514]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 01:21:22.131569 kubelet[2689]: E0117 01:21:22.131497 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-9dv76" podUID="94302249-2c45-4fa9-a8ed-6356d7141062" Jan 17 01:21:22.134370 containerd[1503]: time="2026-01-17T01:21:22.134125218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 01:21:22.444064 containerd[1503]: time="2026-01-17T01:21:22.443612446Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:21:22.445150 containerd[1503]: time="2026-01-17T01:21:22.445104186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 01:21:22.445357 containerd[1503]: time="2026-01-17T01:21:22.445204814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 01:21:22.447255 kubelet[2689]: E0117 01:21:22.446499 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 01:21:22.447255 kubelet[2689]: E0117 01:21:22.446575 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 01:21:22.447255 kubelet[2689]: E0117 01:21:22.446693 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-796c9cbbf8-tx6d5_calico-system(722a4a78-bbcc-4e35-a380-cd81c1aedcd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 01:21:22.447255 kubelet[2689]: E0117 01:21:22.446767 2689 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-796c9cbbf8-tx6d5" podUID="722a4a78-bbcc-4e35-a380-cd81c1aedcd6" Jan 17 01:21:23.146333 kubelet[2689]: E0117 01:21:23.146243 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b97d9fcf7-cbr65" podUID="e0fcbf88-1dad-4938-9ddc-ff2aaa8588a0" Jan 17 01:21:23.466726 systemd[1]: Started sshd@23-10.244.8.82:22-20.161.92.111:53830.service - OpenSSH per-connection server daemon (20.161.92.111:53830). Jan 17 01:21:24.096869 sshd[5861]: Accepted publickey for core from 20.161.92.111 port 53830 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:21:24.103502 sshd[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:21:24.113032 systemd-logind[1484]: New session 26 of user core. Jan 17 01:21:24.119607 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 01:21:25.018685 sshd[5861]: pam_unix(sshd:session): session closed for user core Jan 17 01:21:25.025236 systemd[1]: sshd@23-10.244.8.82:22-20.161.92.111:53830.service: Deactivated successfully. Jan 17 01:21:25.029216 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 01:21:25.033788 systemd-logind[1484]: Session 26 logged out. Waiting for processes to exit. Jan 17 01:21:25.037496 systemd-logind[1484]: Removed session 26. 
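
The update_engine block above is expected noise rather than a failure: the Omaha request is "posted to disabled" and curl then reports "Could not resolve host: disabled" because the update server URL is the literal string disabled. That matches Flatcar's documented way of turning off update checks in /etc/flatcar/update.conf; a config consistent with this journal (the group matches the "Current group set to lts" line) would be:

    # /etc/flatcar/update.conf -- inferred from the log, not read from disk
    GROUP=lts
    SERVER=disabled

With the check disabled, update_engine logs "No HTTP response, retry 1" and locksmithd records UPDATE_STATUS_CHECKING_FOR_UPDATE with no new version, as seen above.
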
Jan 17 01:21:26.137878 kubelet[2689]: E0117 01:21:26.137788 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8559fb66ff-gwd8z" podUID="3996ebc6-9eb5-4686-84b4-4c62f64f0ca5" Jan 17 01:21:26.140634 containerd[1503]: time="2026-01-17T01:21:26.138228897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 01:21:26.478073 containerd[1503]: time="2026-01-17T01:21:26.477156762Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 01:21:26.480077 containerd[1503]: time="2026-01-17T01:21:26.479718217Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 01:21:26.480077 containerd[1503]: time="2026-01-17T01:21:26.479734633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 01:21:26.481501 kubelet[2689]: E0117 01:21:26.480827 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 01:21:26.481501 kubelet[2689]: E0117 01:21:26.480911 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 01:21:26.481501 kubelet[2689]: E0117 01:21:26.481048 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-trqpb_calico-system(03cd3dbd-5e1b-4532-9c89-eb080f7c53df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 01:21:26.481501 kubelet[2689]: E0117 01:21:26.481099 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-trqpb" podUID="03cd3dbd-5e1b-4532-9c89-eb080f7c53df"