Jan 20 01:34:07.040613 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:42:14 -00 2026
Jan 20 01:34:07.040649 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 01:34:07.040663 kernel: BIOS-provided physical RAM map:
Jan 20 01:34:07.040678 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 01:34:07.040688 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 01:34:07.040698 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 01:34:07.040710 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 20 01:34:07.040720 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 20 01:34:07.040731 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 01:34:07.040741 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 01:34:07.040751 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 01:34:07.040774 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 01:34:07.040792 kernel: NX (Execute Disable) protection: active
Jan 20 01:34:07.040803 kernel: APIC: Static calls initialized
Jan 20 01:34:07.040815 kernel: SMBIOS 2.8 present.
Jan 20 01:34:07.040827 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 20 01:34:07.041841 kernel: Hypervisor detected: KVM
Jan 20 01:34:07.041868 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 01:34:07.041881 kernel: kvm-clock: using sched offset of 4529210743 cycles
Jan 20 01:34:07.041894 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 01:34:07.041906 kernel: tsc: Detected 2499.998 MHz processor
Jan 20 01:34:07.041918 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 01:34:07.041930 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 01:34:07.041942 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 20 01:34:07.041953 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 01:34:07.041965 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 01:34:07.041982 kernel: Using GB pages for direct mapping
Jan 20 01:34:07.041994 kernel: ACPI: Early table checksum verification disabled
Jan 20 01:34:07.042005 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 20 01:34:07.042017 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:34:07.042029 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:34:07.042040 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:34:07.042051 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 20 01:34:07.042063 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:34:07.042074 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:34:07.042104 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:34:07.042119 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:34:07.042130 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 20 01:34:07.042142 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 20 01:34:07.042154 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 20 01:34:07.042173 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 20 01:34:07.042186 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 20 01:34:07.042203 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 20 01:34:07.042215 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 20 01:34:07.042227 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 20 01:34:07.042239 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 20 01:34:07.042251 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 20 01:34:07.042262 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 20 01:34:07.042274 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 20 01:34:07.042291 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 20 01:34:07.042303 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 20 01:34:07.042315 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 20 01:34:07.042327 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 20 01:34:07.042338 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 20 01:34:07.042350 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 20 01:34:07.042374 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 20 01:34:07.042386 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 20 01:34:07.042397 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 20 01:34:07.042409 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 20 01:34:07.042438 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 20 01:34:07.042450 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 20 01:34:07.042462 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 20 01:34:07.042474 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 20 01:34:07.042486 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 20 01:34:07.042498 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 20 01:34:07.042510 kernel: Zone ranges:
Jan 20 01:34:07.042522 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 01:34:07.042534 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 20 01:34:07.042551 kernel: Normal empty
Jan 20 01:34:07.042564 kernel: Movable zone start for each node
Jan 20 01:34:07.042576 kernel: Early memory node ranges
Jan 20 01:34:07.042587 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 01:34:07.042599 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 20 01:34:07.042611 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 20 01:34:07.042623 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 01:34:07.042635 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 01:34:07.042647 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 20 01:34:07.042659 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 01:34:07.042676 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 01:34:07.042688 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 01:34:07.042700 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 01:34:07.042712 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 01:34:07.042724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 01:34:07.042736 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 01:34:07.042748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 01:34:07.042775 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 01:34:07.042788 kernel: TSC deadline timer available
Jan 20 01:34:07.042807 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 20 01:34:07.042819 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 01:34:07.042831 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 01:34:07.042843 kernel: Booting paravirtualized kernel on KVM
Jan 20 01:34:07.042855 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 01:34:07.042867 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 20 01:34:07.042879 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Jan 20 01:34:07.042891 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Jan 20 01:34:07.042903 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 20 01:34:07.042920 kernel: kvm-guest: PV spinlocks enabled
Jan 20 01:34:07.042933 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 01:34:07.042947 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 01:34:07.042959 kernel: random: crng init done
Jan 20 01:34:07.042971 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 01:34:07.042983 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 20 01:34:07.042995 kernel: Fallback order for Node 0: 0
Jan 20 01:34:07.043007 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 20 01:34:07.043025 kernel: Policy zone: DMA32
Jan 20 01:34:07.043037 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 01:34:07.043049 kernel: software IO TLB: area num 16.
Jan 20 01:34:07.043061 kernel: Memory: 1901596K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42880K init, 2316K bss, 194760K reserved, 0K cma-reserved)
Jan 20 01:34:07.043073 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 20 01:34:07.043085 kernel: Kernel/User page tables isolation: enabled
Jan 20 01:34:07.046234 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 20 01:34:07.046252 kernel: ftrace: allocated 149 pages with 4 groups
Jan 20 01:34:07.046264 kernel: Dynamic Preempt: voluntary
Jan 20 01:34:07.046285 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 01:34:07.046303 kernel: rcu: RCU event tracing is enabled.
Jan 20 01:34:07.046316 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 20 01:34:07.046329 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 01:34:07.046341 kernel: Rude variant of Tasks RCU enabled.
Jan 20 01:34:07.046368 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 01:34:07.046382 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 01:34:07.046394 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 20 01:34:07.046407 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 20 01:34:07.046422 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 01:34:07.046435 kernel: Console: colour VGA+ 80x25
Jan 20 01:34:07.046466 kernel: printk: console [tty0] enabled
Jan 20 01:34:07.046479 kernel: printk: console [ttyS0] enabled
Jan 20 01:34:07.046491 kernel: ACPI: Core revision 20230628
Jan 20 01:34:07.046503 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 01:34:07.046517 kernel: x2apic enabled
Jan 20 01:34:07.046535 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 01:34:07.046548 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 20 01:34:07.046561 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 20 01:34:07.046573 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 01:34:07.046586 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 20 01:34:07.046601 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 20 01:34:07.046613 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 01:34:07.046994 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 01:34:07.047009 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 01:34:07.047021 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 20 01:34:07.047042 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 20 01:34:07.047055 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 20 01:34:07.047067 kernel: MDS: Mitigation: Clear CPU buffers
Jan 20 01:34:07.047079 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 20 01:34:07.047107 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 20 01:34:07.047121 kernel: active return thunk: its_return_thunk
Jan 20 01:34:07.047134 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 20 01:34:07.047146 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 01:34:07.047159 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 01:34:07.047172 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 01:34:07.047184 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 01:34:07.047204 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 20 01:34:07.047217 kernel: Freeing SMP alternatives memory: 32K
Jan 20 01:34:07.047230 kernel: pid_max: default: 32768 minimum: 301
Jan 20 01:34:07.047242 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 20 01:34:07.047255 kernel: landlock: Up and running.
Jan 20 01:34:07.047267 kernel: SELinux: Initializing.
Jan 20 01:34:07.047280 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 20 01:34:07.047292 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 20 01:34:07.047305 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 20 01:34:07.047318 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 20 01:34:07.047337 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 20 01:34:07.047350 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 20 01:34:07.047363 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 20 01:34:07.047376 kernel: signal: max sigframe size: 1776
Jan 20 01:34:07.047389 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 01:34:07.047402 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 01:34:07.047414 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 01:34:07.047427 kernel: smp: Bringing up secondary CPUs ...
Jan 20 01:34:07.047440 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 01:34:07.047457 kernel: .... node #0, CPUs: #1
Jan 20 01:34:07.047470 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 20 01:34:07.047483 kernel: smp: Brought up 1 node, 2 CPUs
Jan 20 01:34:07.047495 kernel: smpboot: Max logical packages: 16
Jan 20 01:34:07.047508 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 20 01:34:07.047521 kernel: devtmpfs: initialized
Jan 20 01:34:07.047533 kernel: x86/mm: Memory block size: 128MB
Jan 20 01:34:07.047546 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 01:34:07.047559 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 20 01:34:07.047571 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 01:34:07.047590 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 01:34:07.047603 kernel: audit: initializing netlink subsys (disabled)
Jan 20 01:34:07.047615 kernel: audit: type=2000 audit(1768872845.499:1): state=initialized audit_enabled=0 res=1
Jan 20 01:34:07.047628 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 01:34:07.047641 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 01:34:07.047653 kernel: cpuidle: using governor menu
Jan 20 01:34:07.047666 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 01:34:07.047679 kernel: dca service started, version 1.12.1
Jan 20 01:34:07.047698 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 20 01:34:07.047711 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 01:34:07.047724 kernel: PCI: Using configuration type 1 for base access
Jan 20 01:34:07.047736 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 01:34:07.047736 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 01:34:07.047749 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 01:34:07.047773 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 01:34:07.047786 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 01:34:07.047799 kernel: ACPI: Added _OSI(Module Device)
Jan 20 01:34:07.047811 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 01:34:07.047831 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 01:34:07.047844 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 01:34:07.047857 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 20 01:34:07.047870 kernel: ACPI: Interpreter enabled
Jan 20 01:34:07.047882 kernel: ACPI: PM: (supports S0 S5)
Jan 20 01:34:07.047895 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 01:34:07.047908 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 01:34:07.047920 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 01:34:07.047933 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 01:34:07.047951 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 01:34:07.048273 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 01:34:07.048461 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 20 01:34:07.048632 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 20 01:34:07.048652 kernel: PCI host bridge to bus 0000:00
Jan 20 01:34:07.048866 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 01:34:07.049030 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 01:34:07.050417 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 01:34:07.050579 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 20 01:34:07.050732 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 01:34:07.050918 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 20 01:34:07.051078 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 01:34:07.053025 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 20 01:34:07.053272 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 20 01:34:07.053454 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 20 01:34:07.053635 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 20 01:34:07.053823 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 20 01:34:07.053991 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 01:34:07.055350 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 20 01:34:07.055531 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 20 01:34:07.055750 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 20 01:34:07.055939 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 20 01:34:07.056701 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 20 01:34:07.056907 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 20 01:34:07.057152 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 20 01:34:07.057328 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 20 01:34:07.057546 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 20 01:34:07.057717 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 20 01:34:07.057923 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 20 01:34:07.058112 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 20 01:34:07.058305 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 20 01:34:07.058480 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 20 01:34:07.058695 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 20 01:34:07.058888 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 20 01:34:07.061119 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 20 01:34:07.061313 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 20 01:34:07.061489 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 20 01:34:07.061675 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 20 01:34:07.061861 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 20 01:34:07.062104 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 20 01:34:07.062320 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 20 01:34:07.062546 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 20 01:34:07.062722 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 20 01:34:07.062942 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 20 01:34:07.065209 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 01:34:07.065429 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 20 01:34:07.065614 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 20 01:34:07.065796 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 20 01:34:07.066001 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 20 01:34:07.066189 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 20 01:34:07.069721 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 20 01:34:07.069954 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 20 01:34:07.070169 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 20 01:34:07.070341 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 20 01:34:07.070511 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 20 01:34:07.070721 kernel: pci_bus 0000:02: extended config space not accessible
Jan 20 01:34:07.070956 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 20 01:34:07.071164 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 20 01:34:07.071353 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 20 01:34:07.071528 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 20 01:34:07.071743 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 20 01:34:07.071937 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 20 01:34:07.072138 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 20 01:34:07.072313 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 20 01:34:07.072510 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 20 01:34:07.072747 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 20 01:34:07.072985 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 20 01:34:07.074999 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 20 01:34:07.075237 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 20 01:34:07.075418 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 20 01:34:07.075594 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 20 01:34:07.075776 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 20 01:34:07.075948 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 20 01:34:07.076153 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 20 01:34:07.076324 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 20 01:34:07.076497 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 20 01:34:07.076678 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 20 01:34:07.076862 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 20 01:34:07.077029 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 20 01:34:07.077277 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 20 01:34:07.077447 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 20 01:34:07.077622 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 20 01:34:07.077810 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 20 01:34:07.077980 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 20 01:34:07.078172 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 20 01:34:07.078192 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 01:34:07.078206 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 01:34:07.078220 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 01:34:07.078233 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 01:34:07.078246 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 01:34:07.078267 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 01:34:07.078281 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 01:34:07.078294 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 01:34:07.078307 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 01:34:07.078319 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 01:34:07.078333 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 01:34:07.078346 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 01:34:07.078358 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 01:34:07.078371 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 01:34:07.078390 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 01:34:07.078403 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 01:34:07.078416 kernel: iommu: Default domain type: Translated
Jan 20 01:34:07.078429 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 01:34:07.078442 kernel: PCI: Using ACPI for IRQ routing
Jan 20 01:34:07.078455 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 01:34:07.078467 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 01:34:07.078480 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 20 01:34:07.078653 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 01:34:07.078848 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 01:34:07.079031 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 01:34:07.079052 kernel: vgaarb: loaded
Jan 20 01:34:07.079065 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 01:34:07.079078 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 01:34:07.079135 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 01:34:07.079153 kernel: pnp: PnP ACPI init
Jan 20 01:34:07.079363 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 01:34:07.079394 kernel: pnp: PnP ACPI: found 5 devices
Jan 20 01:34:07.079408 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 01:34:07.079420 kernel: NET: Registered PF_INET protocol family
Jan 20 01:34:07.079433 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 01:34:07.079446 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 20 01:34:07.079459 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 01:34:07.079472 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 20 01:34:07.079485 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 20 01:34:07.079504 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 20 01:34:07.079517 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 20 01:34:07.079530 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 20 01:34:07.079543 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 01:34:07.079556 kernel: NET: Registered PF_XDP protocol family
Jan 20 01:34:07.079769 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 20 01:34:07.079943 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 20 01:34:07.080165 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 20 01:34:07.080345 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 20 01:34:07.080513 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 20 01:34:07.080680 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 20 01:34:07.080861 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 20 01:34:07.081029 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 20 01:34:07.081212 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 20 01:34:07.081387 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 20 01:34:07.081553 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 20 01:34:07.081717 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 20 01:34:07.081899 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 20 01:34:07.082068 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 20 01:34:07.082261 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 20 01:34:07.082427 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 20 01:34:07.082603 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 20 01:34:07.082819 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 20 01:34:07.082989 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 20 01:34:07.083176 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 20 01:34:07.083345 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 20 01:34:07.083513 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 20 01:34:07.083681 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 20 01:34:07.083862 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 20 01:34:07.084048 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 20 01:34:07.084267 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 20 01:34:07.084444 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 20 01:34:07.084610 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 20 01:34:07.084788 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 20 01:34:07.084980 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 20 01:34:07.085190 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 20 01:34:07.085366 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 20 01:34:07.085533 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 20 01:34:07.085698 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 20 01:34:07.085878 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 20 01:34:07.086045 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 20 01:34:07.086239 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 20 01:34:07.086414 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 20 01:34:07.086581 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 20 01:34:07.086747 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 20 01:34:07.086935 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 20 01:34:07.087122 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 20 01:34:07.087294 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 20 01:34:07.087463 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 20 01:34:07.087631 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 20 01:34:07.087851 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 20 01:34:07.088021 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 20 01:34:07.088219 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 20 01:34:07.088405 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 20 01:34:07.088590 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 20 01:34:07.088796 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 01:34:07.088957 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 01:34:07.089130 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 01:34:07.089289 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 20 01:34:07.089454 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 01:34:07.089610 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 20 01:34:07.089817 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 20 01:34:07.089985 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 20 01:34:07.090197 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 20 01:34:07.090372 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 20 01:34:07.090546 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 20 01:34:07.090713 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 20 01:34:07.090886 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 20 01:34:07.091067 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 20 01:34:07.091304 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 20 01:34:07.091463 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 20 01:34:07.091642 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 20 01:34:07.091824 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 20 01:34:07.091984 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 20 01:34:07.092244 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 20 01:34:07.092407 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 20 01:34:07.092568 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 20 01:34:07.092784 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 20 01:34:07.092945 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 20 01:34:07.093130 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 20 01:34:07.093347 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 20 01:34:07.093508 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 20 01:34:07.093665 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 20 01:34:07.093870 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 20 01:34:07.094033 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 20 01:34:07.094218 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 20 01:34:07.094248 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 01:34:07.094262 kernel: PCI: CLS 0 bytes, default 64
Jan 20 01:34:07.094277 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 20 01:34:07.094291 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Jan 20 01:34:07.094304 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 20 01:34:07.094318 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 20 01:34:07.094332 kernel: Initialise system trusted keyrings
Jan 20 01:34:07.094346 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 20 01:34:07.094366 kernel: Key type asymmetric registered
Jan 20 01:34:07.094380 kernel: Asymmetric key parser 'x509' registered
Jan 20 01:34:07.094393 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 20 01:34:07.094407 kernel: io scheduler mq-deadline registered
Jan 20 01:34:07.094420 kernel: io scheduler kyber registered
Jan 20 01:34:07.094434 kernel: io scheduler bfq registered
Jan 20 01:34:07.094605 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 20 01:34:07.094828 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 20 01:34:07.095003 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:34:07.095209 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 20 01:34:07.095378 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 20 01:34:07.095546 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:34:07.095716 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 20 01:34:07.095933 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 20 01:34:07.096126 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:34:07.096310 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 20 01:34:07.096480 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 20 01:34:07.096649 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:34:07.096836 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 20 01:34:07.097006 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 20 01:34:07.097248 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:34:07.097429 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 20 01:34:07.097597 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 20 01:34:07.097775 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:34:07.097947 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 20 01:34:07.098129 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 20 01:34:07.098297 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:34:07.098476 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 20 01:34:07.098651 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 20 01:34:07.098841 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:34:07.098863 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 01:34:07.098878 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 01:34:07.098892 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 01:34:07.098905 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 01:34:07.098927 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 01:34:07.098941 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 01:34:07.098955 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 01:34:07.098968 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 01:34:07.099161 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 20 01:34:07.099339 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 01:34:07.099498 kernel: rtc_cmos 00:03: registered as rtc0
Jan 20 01:34:07.099711 kernel: rtc_cmos 00:03: setting system clock to 2026-01-20T01:34:06 UTC (1768872846)
Jan 20 01:34:07.099737 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 20 01:34:07.099751 kernel: intel_pstate: CPU model not supported
Jan 20 01:34:07.099777 kernel: NET: Registered PF_INET6 protocol family
Jan 20 01:34:07.099797 kernel: Segment Routing with IPv6
Jan 20 01:34:07.099811 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 01:34:07.099825 kernel: NET: Registered PF_PACKET protocol family
Jan 20 01:34:07.099838 kernel: Key type dns_resolver registered
Jan 20 01:34:07.099852 kernel: IPI shorthand broadcast: enabled
Jan 20 01:34:07.099873 kernel: sched_clock: Marking stable (1339003754, 242664890)->(1710065621, -128396977)
Jan 20 01:34:07.099887 kernel: registered taskstats version 1
Jan 20 01:34:07.099901 kernel: Loading compiled-in X.509 certificates
Jan 20 01:34:07.099914 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: ea2d429b6f340e470c7de035feb011ab349763d1'
Jan 20 01:34:07.099928 kernel: Key type .fscrypt registered
Jan 20 01:34:07.099942 kernel: Key type fscrypt-provisioning registered
Jan 20 01:34:07.099956 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 01:34:07.099969 kernel: ima: Allocated hash algorithm: sha1
Jan 20 01:34:07.099982 kernel: ima: No architecture policies found
Jan 20 01:34:07.100002 kernel: clk: Disabling unused clocks
Jan 20 01:34:07.100016 kernel: Freeing unused kernel image (initmem) memory: 42880K
Jan 20 01:34:07.100029 kernel: Write protecting the kernel read-only data: 36864k
Jan 20 01:34:07.100043 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 20 01:34:07.100057 kernel: Run /init as init process
Jan 20 01:34:07.100070 kernel: with arguments:
Jan 20 01:34:07.100083 kernel: /init
Jan 20 01:34:07.100111 kernel: with environment:
Jan 20 01:34:07.100125 kernel: HOME=/
Jan 20 01:34:07.100148 kernel: TERM=linux
Jan 20 01:34:07.100148 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 20 01:34:07.100166 systemd[1]: Detected virtualization kvm.
Jan 20 01:34:07.100180 systemd[1]: Detected architecture x86-64.
Jan 20 01:34:07.100194 systemd[1]: Running in initrd.
Jan 20 01:34:07.100208 systemd[1]: No hostname configured, using default hostname.
Jan 20 01:34:07.100222 systemd[1]: Hostname set to .
Jan 20 01:34:07.100236 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 01:34:07.100257 systemd[1]: Queued start job for default target initrd.target.
Jan 20 01:34:07.100272 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 01:34:07.100286 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 01:34:07.100302 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 01:34:07.100317 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 01:34:07.100331 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 01:34:07.100346 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 01:34:07.100368 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 01:34:07.100383 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 01:34:07.100398 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 01:34:07.100412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 01:34:07.100432 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:34:07.100447 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:34:07.100461 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:34:07.100476 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:34:07.100495 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:34:07.100510 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:34:07.100525 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 01:34:07.100539 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 20 01:34:07.100554 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:34:07.100568 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:34:07.100583 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:34:07.100597 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:34:07.100611 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 01:34:07.100631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:34:07.100646 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 01:34:07.100660 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 01:34:07.100674 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:34:07.100689 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:34:07.100703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:34:07.100779 systemd-journald[202]: Collecting audit messages is disabled. Jan 20 01:34:07.100819 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 01:34:07.100834 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:34:07.100849 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 01:34:07.100870 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 01:34:07.100886 systemd-journald[202]: Journal started Jan 20 01:34:07.100913 systemd-journald[202]: Runtime Journal (/run/log/journal/dc516f5d2e234f42be8f8cfe25e4ca8d) is 4.7M, max 38.0M, 33.2M free. Jan 20 01:34:07.069701 systemd-modules-load[203]: Inserted module 'overlay' Jan 20 01:34:07.162968 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 01:34:07.163008 kernel: Bridge firewalling registered Jan 20 01:34:07.117105 systemd-modules-load[203]: Inserted module 'br_netfilter' Jan 20 01:34:07.173117 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:34:07.173300 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:34:07.174349 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:34:07.178236 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 01:34:07.185361 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 01:34:07.197356 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 20 01:34:07.199298 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:34:07.204316 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:34:07.222509 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:34:07.232511 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:34:07.234052 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:34:07.235670 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:34:07.243368 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 01:34:07.247262 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:34:07.262121 dracut-cmdline[236]: dracut-dracut-053 Jan 20 01:34:07.268113 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 01:34:07.303848 systemd-resolved[238]: Positive Trust Anchors: Jan 20 01:34:07.303873 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:34:07.303920 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:34:07.308156 systemd-resolved[238]: Defaulting to hostname 'linux'. Jan 20 01:34:07.309997 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:34:07.313602 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:34:07.383187 kernel: SCSI subsystem initialized Jan 20 01:34:07.395140 kernel: Loading iSCSI transport class v2.0-870. Jan 20 01:34:07.409158 kernel: iscsi: registered transport (tcp) Jan 20 01:34:07.436166 kernel: iscsi: registered transport (qla4xxx) Jan 20 01:34:07.436270 kernel: QLogic iSCSI HBA Driver Jan 20 01:34:07.494492 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 01:34:07.500325 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 01:34:07.542424 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 20 01:34:07.542688 kernel: device-mapper: uevent: version 1.0.3 Jan 20 01:34:07.545145 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 20 01:34:07.597189 kernel: raid6: sse2x4 gen() 7942 MB/s Jan 20 01:34:07.615174 kernel: raid6: sse2x2 gen() 5603 MB/s Jan 20 01:34:07.633949 kernel: raid6: sse2x1 gen() 5566 MB/s Jan 20 01:34:07.634055 kernel: raid6: using algorithm sse2x4 gen() 7942 MB/s Jan 20 01:34:07.652950 kernel: raid6: .... xor() 5073 MB/s, rmw enabled Jan 20 01:34:07.653033 kernel: raid6: using ssse3x2 recovery algorithm Jan 20 01:34:07.679137 kernel: xor: automatically using best checksumming function avx Jan 20 01:34:07.877137 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 01:34:07.892722 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:34:07.900415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:34:07.930925 systemd-udevd[421]: Using default interface naming scheme 'v255'. Jan 20 01:34:07.937976 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:34:07.947314 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 01:34:07.970201 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation Jan 20 01:34:08.013567 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:34:08.019283 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:34:08.139537 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:34:08.148036 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 01:34:08.182453 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 01:34:08.187774 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:34:08.190373 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:34:08.192185 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:34:08.201530 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 01:34:08.231835 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:34:08.293872 kernel: ACPI: bus type USB registered Jan 20 01:34:08.293958 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 01:34:08.298116 kernel: usbcore: registered new interface driver usbfs Jan 20 01:34:08.301598 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 20 01:34:08.301915 kernel: usbcore: registered new interface driver hub Jan 20 01:34:08.308118 kernel: usbcore: registered new device driver usb Jan 20 01:34:08.314864 kernel: AVX version of gcm_enc/dec engaged. Jan 20 01:34:08.314915 kernel: AES CTR mode by8 optimization enabled Jan 20 01:34:08.319325 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 20 01:34:08.320280 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 01:34:08.321322 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:34:08.324336 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 01:34:08.325080 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:34:08.325275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 01:34:08.327784 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:34:08.342061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:34:08.359877 kernel: libata version 3.00 loaded. Jan 20 01:34:08.359913 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 01:34:08.359934 kernel: GPT:17805311 != 125829119 Jan 20 01:34:08.359953 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 01:34:08.359971 kernel: GPT:17805311 != 125829119 Jan 20 01:34:08.359999 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 01:34:08.360019 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:34:08.370085 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 01:34:08.370385 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 01:34:08.375116 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 20 01:34:08.375402 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 01:34:08.381126 kernel: scsi host0: ahci Jan 20 01:34:08.383808 kernel: scsi host1: ahci Jan 20 01:34:08.385109 kernel: scsi host2: ahci Jan 20 01:34:08.387520 kernel: scsi host3: ahci Jan 20 01:34:08.387768 kernel: scsi host4: ahci Jan 20 01:34:08.396123 kernel: scsi host5: ahci Jan 20 01:34:08.396364 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jan 20 01:34:08.396412 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jan 20 01:34:08.396442 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jan 20 01:34:08.396461 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jan 20 01:34:08.396478 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jan 20 01:34:08.397791 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jan 20 01:34:08.446142 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474) Jan 20 01:34:08.449114 kernel: BTRFS: device fsid ea39c6ab-04c2-4917-8268-943d4ecb2b5c devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (478) Jan 20 01:34:08.470332 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 01:34:08.510136 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:34:08.517910 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 01:34:08.529105 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 01:34:08.529916 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 20 01:34:08.537995 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 01:34:08.552356 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 01:34:08.557512 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 01:34:08.560437 disk-uuid[560]: Primary Header is updated. Jan 20 01:34:08.560437 disk-uuid[560]: Secondary Entries is updated. Jan 20 01:34:08.560437 disk-uuid[560]: Secondary Header is updated. 
Jan 20 01:34:08.566133 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:34:08.574161 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:34:08.601619 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:34:08.706137 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 01:34:08.709512 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 01:34:08.709560 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 01:34:08.710143 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 01:34:08.712946 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 01:34:08.715114 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 20 01:34:08.744820 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 20 01:34:08.745136 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 20 01:34:08.749205 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 20 01:34:08.753565 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 20 01:34:08.753848 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 20 01:34:08.755872 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 20 01:34:08.758822 kernel: hub 1-0:1.0: USB hub found Jan 20 01:34:08.759088 kernel: hub 1-0:1.0: 4 ports detected Jan 20 01:34:08.762115 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 20 01:34:08.764780 kernel: hub 2-0:1.0: USB hub found Jan 20 01:34:08.765022 kernel: hub 2-0:1.0: 4 ports detected Jan 20 01:34:08.997127 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 20 01:34:09.138119 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 20 01:34:09.145165 kernel: usbcore: registered new interface driver usbhid Jan 20 01:34:09.145208 kernel: usbhid: USB HID core driver Jan 20 01:34:09.152361 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 20 01:34:09.152403 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 20 01:34:09.575626 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:34:09.575733 disk-uuid[561]: The operation has completed successfully. Jan 20 01:34:09.641407 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 01:34:09.641582 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 01:34:09.660355 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 01:34:09.666623 sh[587]: Success Jan 20 01:34:09.684137 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 20 01:34:09.742958 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 01:34:09.752533 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 01:34:09.756776 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
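verity-setup has now built /dev/mapper/usr, so every block of the read-only /usr partition is checked at read time against a sha256 hash tree whose root hash is pinned on the kernel command line. A deliberately simplified sketch of the idea, using a single tree level and no salt (real dm-verity salts each hash and builds as many levels as needed so every hash block fits in one 4 KiB block):

    import hashlib

    BLOCK = 4096   # dm-verity's default data-block size

    def verity_root(path):
        # hash every data block, then hash the concatenation of those hashes;
        # any tampered block changes its leaf hash and breaks the chain up
        # to the pinned root hash, so the kernel rejects the read
        level0 = []
        with open(path, "rb") as f:
            while chunk := f.read(BLOCK):
                level0.append(hashlib.sha256(chunk).digest())
        return hashlib.sha256(b"".join(level0)).hexdigest()

    # print(verity_root("usr.img"))   # hypothetical image path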
Jan 20 01:34:09.791957 kernel: BTRFS info (device dm-0): first mount of filesystem ea39c6ab-04c2-4917-8268-943d4ecb2b5c Jan 20 01:34:09.792027 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:34:09.792048 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 20 01:34:09.792079 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 01:34:09.794169 kernel: BTRFS info (device dm-0): using free space tree Jan 20 01:34:09.805992 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 01:34:09.807548 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 01:34:09.812278 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 01:34:09.814271 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 01:34:09.838143 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 01:34:09.838219 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:34:09.838241 kernel: BTRFS info (device vda6): using free space tree Jan 20 01:34:09.845123 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 01:34:09.868121 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 01:34:09.868029 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 20 01:34:09.876700 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 01:34:09.883292 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 01:34:09.953630 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:34:09.968402 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:34:09.995862 systemd-networkd[768]: lo: Link UP Jan 20 01:34:09.995876 systemd-networkd[768]: lo: Gained carrier Jan 20 01:34:09.999184 systemd-networkd[768]: Enumeration completed Jan 20 01:34:09.999317 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:34:10.000618 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:34:10.000624 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:34:10.001349 systemd[1]: Reached target network.target - Network. Jan 20 01:34:10.005214 systemd-networkd[768]: eth0: Link UP Jan 20 01:34:10.005221 systemd-networkd[768]: eth0: Gained carrier Jan 20 01:34:10.005240 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:34:10.027627 systemd-networkd[768]: eth0: DHCPv4 address 10.230.15.2/30, gateway 10.230.15.1 acquired from 10.230.15.1 Jan 20 01:34:10.051542 ignition[697]: Ignition 2.19.0 Jan 20 01:34:10.051575 ignition[697]: Stage: fetch-offline Jan 20 01:34:10.054354 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 20 01:34:10.051706 ignition[697]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:34:10.051733 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 01:34:10.051948 ignition[697]: parsed url from cmdline: "" Jan 20 01:34:10.051956 ignition[697]: no config URL provided Jan 20 01:34:10.051966 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:34:10.051982 ignition[697]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:34:10.051992 ignition[697]: failed to fetch config: resource requires networking Jan 20 01:34:10.052318 ignition[697]: Ignition finished successfully Jan 20 01:34:10.062419 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 20 01:34:10.086883 ignition[777]: Ignition 2.19.0 Jan 20 01:34:10.088145 ignition[777]: Stage: fetch Jan 20 01:34:10.088438 ignition[777]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:34:10.088459 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 01:34:10.088659 ignition[777]: parsed url from cmdline: "" Jan 20 01:34:10.088667 ignition[777]: no config URL provided Jan 20 01:34:10.088699 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:34:10.088716 ignition[777]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:34:10.088859 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 20 01:34:10.088927 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 20 01:34:10.088967 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 20 01:34:10.105120 ignition[777]: GET result: OK Jan 20 01:34:10.105998 ignition[777]: parsing config with SHA512: 1d56c1d9a7b1afff6345471ff0c2f789204563dbbf5f7cd9e132c502bad8d73180077c1327274583b9152082c64d84c91bf36e751177b96fe20f393ee7f34013 Jan 20 01:34:10.112950 unknown[777]: fetched base config from "system" Jan 20 01:34:10.113945 unknown[777]: fetched base config from "system" Jan 20 01:34:10.114748 unknown[777]: fetched user config from "openstack" Jan 20 01:34:10.115702 ignition[777]: fetch: fetch complete Jan 20 01:34:10.115713 ignition[777]: fetch: fetch passed Jan 20 01:34:10.115810 ignition[777]: Ignition finished successfully Jan 20 01:34:10.118042 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 01:34:10.123323 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 01:34:10.161427 ignition[783]: Ignition 2.19.0 Jan 20 01:34:10.161451 ignition[783]: Stage: kargs Jan 20 01:34:10.161767 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:34:10.161788 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 01:34:10.162948 ignition[783]: kargs: kargs passed Jan 20 01:34:10.163032 ignition[783]: Ignition finished successfully Jan 20 01:34:10.167179 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 01:34:10.179381 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 01:34:10.199984 ignition[789]: Ignition 2.19.0 Jan 20 01:34:10.200022 ignition[789]: Stage: disks Jan 20 01:34:10.200384 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:34:10.204124 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
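The fetch-offline stage above failed by design ("failed to fetch config: resource requires networking"); once networkd held a DHCP lease, the fetch stage pulled the user config from the OpenStack metadata service and logged its SHA512 fingerprint. A minimal sketch of that fetch-and-fingerprint step, using the URL from the log:

    import hashlib
    import urllib.request

    URL = "http://169.254.169.254/openstack/latest/user_data"   # from the log

    with urllib.request.urlopen(URL, timeout=10) as resp:   # link-local metadata
        body = resp.read()                                  # API, no routing needed

    print("parsing config with SHA512:", hashlib.sha512(body).hexdigest())
    # Ignition then merges this user config with the base config shipped
    # in the image ("fetched base config from system")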
Jan 20 01:34:10.200405 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 01:34:10.201565 ignition[789]: disks: disks passed Jan 20 01:34:10.206868 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 01:34:10.201644 ignition[789]: Ignition finished successfully Jan 20 01:34:10.207748 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 01:34:10.209081 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:34:10.210727 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:34:10.212148 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:34:10.223399 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 01:34:10.249051 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 20 01:34:10.253916 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 01:34:10.263316 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 01:34:10.385429 kernel: EXT4-fs (vda9): mounted filesystem 3f4cac35-b37d-4410-a45a-1329edafa0f9 r/w with ordered data mode. Quota mode: none. Jan 20 01:34:10.386558 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 01:34:10.388627 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 01:34:10.397247 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:34:10.402215 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 01:34:10.404291 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 01:34:10.411615 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 20 01:34:10.412751 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (805) Jan 20 01:34:10.415404 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 01:34:10.416533 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:34:10.420820 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 01:34:10.420862 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:34:10.420883 kernel: BTRFS info (device vda6): using free space tree Jan 20 01:34:10.429792 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 01:34:10.438797 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 01:34:10.440847 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 01:34:10.449200 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:34:10.500181 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 01:34:10.510460 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory Jan 20 01:34:10.519483 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 01:34:10.527921 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 01:34:10.642358 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 01:34:10.649235 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jan 20 01:34:10.651329 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 01:34:10.669133 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 01:34:10.699987 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 01:34:10.719453 ignition[923]: INFO : Ignition 2.19.0 Jan 20 01:34:10.719453 ignition[923]: INFO : Stage: mount Jan 20 01:34:10.722487 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:34:10.722487 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 01:34:10.722487 ignition[923]: INFO : mount: mount passed Jan 20 01:34:10.722487 ignition[923]: INFO : Ignition finished successfully Jan 20 01:34:10.722609 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 01:34:10.784258 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 01:34:11.441498 systemd-networkd[768]: eth0: Gained IPv6LL Jan 20 01:34:12.065174 systemd-networkd[768]: eth0: Ignoring DHCPv6 address 2a02:1348:179:83c0:24:19ff:fee6:f02/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:83c0:24:19ff:fee6:f02/64 assigned by NDisc. Jan 20 01:34:12.065188 systemd-networkd[768]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 20 01:34:17.598138 coreos-metadata[807]: Jan 20 01:34:17.598 WARN failed to locate config-drive, using the metadata service API instead Jan 20 01:34:17.623656 coreos-metadata[807]: Jan 20 01:34:17.623 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 20 01:34:17.641004 coreos-metadata[807]: Jan 20 01:34:17.640 INFO Fetch successful Jan 20 01:34:17.642028 coreos-metadata[807]: Jan 20 01:34:17.641 INFO wrote hostname srv-nmle2.gb1.brightbox.com to /sysroot/etc/hostname Jan 20 01:34:17.643903 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 20 01:34:17.644127 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 20 01:34:17.654295 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 01:34:17.670339 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:34:17.683135 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938) Jan 20 01:34:17.689183 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 01:34:17.689226 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:34:17.689247 kernel: BTRFS info (device vda6): using free space tree Jan 20 01:34:17.711137 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 01:34:17.713811 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
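The hostname agent above illustrates the OpenStack fallback order: wait for a config drive, give up ("failed to locate config-drive, using the metadata service API instead"), fetch the hostname over the link-local API, and write it into the still-mounted /sysroot. A sketch of that fallback using the label, URL, and destination path shown in the log; the config-drive branch is stubbed out since this boot never exercised it:

    import os
    import urllib.request

    CONFIG_DRIVE = "/dev/disk/by-label/config-2"
    META_URL = "http://169.254.169.254/latest/meta-data/hostname"

    def fetch_hostname():
        if os.path.exists(CONFIG_DRIVE):
            # the real agent mounts the drive and reads its metadata JSON
            raise NotImplementedError("config drive not present in this boot")
        with urllib.request.urlopen(META_URL, timeout=10) as resp:
            return resp.read().decode().strip()

    hostname = fetch_hostname()
    with open("/sysroot/etc/hostname", "w") as f:   # destination from the log
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to /sysroot/etc/hostname")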
Jan 20 01:34:17.745139 ignition[956]: INFO : Ignition 2.19.0 Jan 20 01:34:17.747506 ignition[956]: INFO : Stage: files Jan 20 01:34:17.747506 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:34:17.747506 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 01:34:17.750045 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Jan 20 01:34:17.752103 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 01:34:17.752103 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 01:34:17.756957 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 01:34:17.758157 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 01:34:17.759745 unknown[956]: wrote ssh authorized keys file for user: core Jan 20 01:34:17.760853 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 01:34:17.762162 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 01:34:17.763481 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 20 01:34:17.980751 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 01:34:18.261212 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 01:34:18.261212 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 01:34:18.261212 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 01:34:18.261212 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:34:18.261212 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:34:18.261212 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:34:18.274711 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:34:18.274711 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:34:18.274711 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:34:18.274711 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:34:18.274711 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:34:18.274711 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 01:34:18.274711 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 01:34:18.274711 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 01:34:18.274711 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 20 01:34:18.566000 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 01:34:20.526748 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 01:34:20.526748 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 01:34:20.535133 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:34:20.538171 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:34:20.538171 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 01:34:20.538171 ignition[956]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 20 01:34:20.538171 ignition[956]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 01:34:20.538171 ignition[956]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:34:20.538171 ignition[956]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:34:20.538171 ignition[956]: INFO : files: files passed Jan 20 01:34:20.538171 ignition[956]: INFO : Ignition finished successfully Jan 20 01:34:20.541412 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 01:34:20.556484 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 01:34:20.566437 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 01:34:20.571337 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 01:34:20.571564 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 01:34:20.586294 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:34:20.588168 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:34:20.589579 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:34:20.591038 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:34:20.593637 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 01:34:20.601424 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 01:34:20.645990 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 01:34:20.646225 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 01:34:20.648461 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jan 20 01:34:20.649617 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 01:34:20.651427 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 01:34:20.661432 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 01:34:20.680951 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:34:20.687307 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 01:34:20.711789 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:34:20.712832 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:34:20.714768 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 01:34:20.716271 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 01:34:20.716489 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:34:20.718356 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 01:34:20.719320 systemd[1]: Stopped target basic.target - Basic System. Jan 20 01:34:20.720964 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 01:34:20.722468 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:34:20.723913 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 01:34:20.725652 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 01:34:20.727291 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:34:20.728958 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 01:34:20.730557 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 01:34:20.732165 systemd[1]: Stopped target swap.target - Swaps. Jan 20 01:34:20.733560 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 01:34:20.733732 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:34:20.735514 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:34:20.736481 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:34:20.737950 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 01:34:20.738394 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:34:20.739674 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 01:34:20.739848 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 01:34:20.741872 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 01:34:20.742045 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:34:20.743151 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 01:34:20.743373 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 01:34:20.751506 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 01:34:20.753316 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 01:34:20.753541 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:34:20.758351 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 01:34:20.759808 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 20 01:34:20.760015 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:34:20.764370 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 01:34:20.764571 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:34:20.775096 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 01:34:20.777210 ignition[1009]: INFO : Ignition 2.19.0 Jan 20 01:34:20.777210 ignition[1009]: INFO : Stage: umount Jan 20 01:34:20.777210 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:34:20.777210 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 01:34:20.777210 ignition[1009]: INFO : umount: umount passed Jan 20 01:34:20.777210 ignition[1009]: INFO : Ignition finished successfully Jan 20 01:34:20.775282 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 01:34:20.778832 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 01:34:20.778982 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 01:34:20.787856 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 01:34:20.787991 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 01:34:20.789067 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 01:34:20.789151 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 01:34:20.790762 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 01:34:20.790862 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 01:34:20.791868 systemd[1]: Stopped target network.target - Network. Jan 20 01:34:20.795130 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 01:34:20.795224 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:34:20.796016 systemd[1]: Stopped target paths.target - Path Units. Jan 20 01:34:20.797882 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 01:34:20.801176 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:34:20.802174 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 01:34:20.802801 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 01:34:20.805246 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 01:34:20.805323 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:34:20.806045 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 01:34:20.806906 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:34:20.808251 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 01:34:20.808332 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 01:34:20.809345 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 01:34:20.809442 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 01:34:20.812642 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 01:34:20.814139 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 01:34:20.817593 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 01:34:20.817781 systemd-networkd[768]: eth0: DHCPv6 lease lost Jan 20 01:34:20.822404 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 20 01:34:20.822590 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 01:34:20.824711 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 01:34:20.824822 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:34:20.838261 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 01:34:20.839406 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 01:34:20.839520 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:34:20.840494 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:34:20.842966 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 01:34:20.843174 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 01:34:20.855740 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 01:34:20.855999 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:34:20.862073 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 01:34:20.862219 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 01:34:20.866197 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 01:34:20.866315 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:34:20.868293 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 01:34:20.868423 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:34:20.870857 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 01:34:20.870934 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 01:34:20.872005 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 01:34:20.872081 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:34:20.880371 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 01:34:20.881202 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 01:34:20.881280 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:34:20.882041 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 01:34:20.882130 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 01:34:20.885530 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 01:34:20.885599 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:34:20.887070 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 01:34:20.887199 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:34:20.889551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:34:20.889634 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:34:20.893178 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 01:34:20.893328 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 01:34:20.899798 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 01:34:20.899943 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 01:34:20.960857 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 20 01:34:20.961052 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 01:34:20.964656 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 01:34:20.965449 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 01:34:20.965531 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 01:34:20.971284 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 01:34:20.983422 systemd[1]: Switching root. Jan 20 01:34:21.021672 systemd-journald[202]: Journal stopped Jan 20 01:34:22.646289 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Jan 20 01:34:22.646438 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 01:34:22.646477 kernel: SELinux: policy capability open_perms=1 Jan 20 01:34:22.646499 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 01:34:22.646517 kernel: SELinux: policy capability always_check_network=0 Jan 20 01:34:22.646543 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 01:34:22.646564 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 01:34:22.646589 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 01:34:22.646609 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 01:34:22.646640 kernel: audit: type=1403 audit(1768872861.308:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 01:34:22.646669 systemd[1]: Successfully loaded SELinux policy in 53.510ms. Jan 20 01:34:22.646700 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.674ms. Jan 20 01:34:22.646723 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 01:34:22.646744 systemd[1]: Detected virtualization kvm. Jan 20 01:34:22.646773 systemd[1]: Detected architecture x86-64. Jan 20 01:34:22.646793 systemd[1]: Detected first boot. Jan 20 01:34:22.646813 systemd[1]: Hostname set to <srv-nmle2.gb1.brightbox.com>. Jan 20 01:34:22.646847 systemd[1]: Initializing machine ID from VM UUID. Jan 20 01:34:22.646869 zram_generator::config[1052]: No configuration found. Jan 20 01:34:22.646891 systemd[1]: Populated /etc with preset unit settings. Jan 20 01:34:22.646911 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 01:34:22.646941 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 01:34:22.646963 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 01:34:22.646984 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 01:34:22.647006 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 01:34:22.647026 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 01:34:22.647058 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 01:34:22.647080 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 01:34:22.650629 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 01:34:22.650660 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 01:34:22.650682 systemd[1]: Created slice user.slice - User and Session Slice.
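"Initializing machine ID from VM UUID" means that on this first boot systemd derived /etc/machine-id from the hypervisor-provided DMI product UUID instead of rolling a random one, so the ID is stable across reinstalls of the same VM. A simplified sketch of that derivation (systemd performs additional validation; the sysfs path is the standard DMI location):

    import pathlib
    import re

    # on KVM the hypervisor exposes the VM UUID via DMI
    raw = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id = re.sub(r"[^0-9a-f]", "", raw.lower())   # 32 hex digits, no dashes

    if len(machine_id) != 32:
        raise ValueError("DMI product UUID did not yield 128 bits")
    print(machine_id)   # what would land in /etc/machine-id; the later
                        # "Commit a transient machine-id" unit persists it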
Jan 20 01:34:22.650734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:34:22.650757 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:34:22.650779 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 01:34:22.650818 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 01:34:22.650842 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 01:34:22.650864 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:34:22.650886 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 01:34:22.650908 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:34:22.650928 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 01:34:22.650949 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 01:34:22.650983 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 01:34:22.651045 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 01:34:22.651069 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:34:22.651089 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:34:22.656151 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:34:22.656176 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:34:22.656208 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 01:34:22.656268 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 01:34:22.656294 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:34:22.656344 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:34:22.656368 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:34:22.656389 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 01:34:22.656410 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 01:34:22.656431 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 01:34:22.656474 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 01:34:22.656514 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:34:22.656537 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 01:34:22.656558 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 01:34:22.656579 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 01:34:22.656600 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 01:34:22.656621 systemd[1]: Reached target machines.target - Containers. Jan 20 01:34:22.656641 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 01:34:22.656664 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 20 01:34:22.656698 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:34:22.656720 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 01:34:22.656741 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:34:22.656761 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:34:22.656782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:34:22.656803 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 01:34:22.656824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:34:22.656845 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 01:34:22.656878 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 01:34:22.656901 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 01:34:22.656922 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 01:34:22.656942 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 01:34:22.656962 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:34:22.656982 kernel: loop: module loaded Jan 20 01:34:22.657002 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:34:22.657022 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:34:22.657043 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 01:34:22.657075 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:34:22.659729 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 01:34:22.659761 systemd[1]: Stopped verity-setup.service. Jan 20 01:34:22.659791 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:34:22.659813 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 01:34:22.659833 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 01:34:22.659853 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 01:34:22.659874 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 01:34:22.659912 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 01:34:22.659936 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 01:34:22.659957 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:34:22.659985 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 01:34:22.660006 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 01:34:22.660028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:34:22.660062 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:34:22.660084 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:34:22.660134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:34:22.660175 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:34:22.660198 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 20 01:34:22.660278 systemd-journald[1141]: Collecting audit messages is disabled. Jan 20 01:34:22.660345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:34:22.660376 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:34:22.660399 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 01:34:22.660419 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 01:34:22.660440 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 01:34:22.660461 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 01:34:22.660496 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:34:22.660518 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 20 01:34:22.660540 systemd-journald[1141]: Journal started Jan 20 01:34:22.660574 systemd-journald[1141]: Runtime Journal (/run/log/journal/dc516f5d2e234f42be8f8cfe25e4ca8d) is 4.7M, max 38.0M, 33.2M free. Jan 20 01:34:22.178203 systemd[1]: Queued start job for default target multi-user.target. Jan 20 01:34:22.207067 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 01:34:22.207773 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 01:34:22.673150 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 01:34:22.694178 kernel: ACPI: bus type drm_connector registered Jan 20 01:34:22.694367 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 01:34:22.694432 kernel: fuse: init (API version 7.39) Jan 20 01:34:22.694463 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:34:22.708763 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 01:34:22.715505 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:34:22.727902 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 01:34:22.727964 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:34:22.750697 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:34:22.765147 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 01:34:22.775613 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:34:22.783190 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 01:34:22.788323 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:34:22.789076 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:34:22.790924 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 01:34:22.792066 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 01:34:22.799455 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 01:34:22.800717 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 01:34:22.829639 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
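The journal size line above is internally consistent: journald caps the runtime journal at a percentage of the backing /run filesystem (10% by default, giving the 38.0M cap here), and the free figure is the cap minus current use, to within the 0.1M rounding of the printed values:

    cap_mib, used_mib = 38.0, 4.7           # figures from the journald line above
    print(f"{cap_mib - used_mib:.1f}M")     # 33.3M, matching the logged 33.2M free
                                            # once the underlying bytes are rounded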
Jan 20 01:34:22.853795 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 01:34:22.870397 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 01:34:22.882251 kernel: loop0: detected capacity change from 0 to 8 Jan 20 01:34:22.884251 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 01:34:22.898424 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 01:34:22.910362 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 20 01:34:22.923274 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 01:34:22.923932 systemd-journald[1141]: Time spent on flushing to /var/log/journal/dc516f5d2e234f42be8f8cfe25e4ca8d is 125.457ms for 1140 entries. Jan 20 01:34:22.923932 systemd-journald[1141]: System Journal (/var/log/journal/dc516f5d2e234f42be8f8cfe25e4ca8d) is 8.0M, max 584.8M, 576.8M free. Jan 20 01:34:23.091991 systemd-journald[1141]: Received client request to flush runtime journal. Jan 20 01:34:23.092200 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 01:34:23.092270 kernel: loop1: detected capacity change from 0 to 219144 Jan 20 01:34:23.092317 kernel: loop2: detected capacity change from 0 to 140768 Jan 20 01:34:22.932804 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 01:34:23.036577 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 01:34:23.039442 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 20 01:34:23.100197 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 01:34:23.102343 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:34:23.103975 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 01:34:23.122348 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:34:23.134303 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 20 01:34:23.172038 kernel: loop3: detected capacity change from 0 to 142488 Jan 20 01:34:23.189496 udevadm[1205]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 20 01:34:23.232059 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jan 20 01:34:23.232888 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Jan 20 01:34:23.253496 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:34:23.256165 kernel: loop4: detected capacity change from 0 to 8 Jan 20 01:34:23.262394 kernel: loop5: detected capacity change from 0 to 219144 Jan 20 01:34:23.287507 kernel: loop6: detected capacity change from 0 to 140768 Jan 20 01:34:23.307035 kernel: loop7: detected capacity change from 0 to 142488 Jan 20 01:34:23.347416 (sd-merge)[1209]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 20 01:34:23.351524 (sd-merge)[1209]: Merged extensions into '/usr'. Jan 20 01:34:23.359910 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 01:34:23.360387 systemd[1]: Reloading... Jan 20 01:34:23.553147 zram_generator::config[1240]: No configuration found. 
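The loop0-loop7 "capacity change" probes are the four sysext images being attached as loop devices (the same four capacity figures repeat, consistent with each image being attached once to scan and once to merge), after which sd-merge lists the extensions and merges them over /usr. The merge is a read-only overlayfs with the base /usr as the lowest layer; a sketch assembling the equivalent mount invocation (the /run staging paths and the ordering among extensions are assumptions for illustration):

    extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes",
                  "oem-openstack"]

    # overlayfs resolves lookups left-to-right across lowerdir, so the
    # extension trees come first and the base /usr last
    layers = [f"/run/extensions/{name}/usr" for name in reversed(extensions)]
    layers.append("/usr")
    print("mount -t overlay overlay -o ro,lowerdir=" + ":".join(layers) + " /usr")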
Jan 20 01:34:23.707311 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 01:34:23.774746 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:34:23.844577 systemd[1]: Reloading finished in 483 ms. Jan 20 01:34:23.906183 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 01:34:23.907638 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 01:34:23.908880 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 01:34:23.920380 systemd[1]: Starting ensure-sysext.service... Jan 20 01:34:23.923351 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:34:23.929313 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:34:23.940355 systemd[1]: Reloading requested from client PID 1294 ('systemctl') (unit ensure-sysext.service)... Jan 20 01:34:23.940374 systemd[1]: Reloading... Jan 20 01:34:23.994965 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 01:34:23.995625 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 01:34:24.001190 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 01:34:24.001620 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Jan 20 01:34:24.001738 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Jan 20 01:34:24.005885 systemd-udevd[1296]: Using default interface naming scheme 'v255'. Jan 20 01:34:24.014518 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:34:24.015721 systemd-tmpfiles[1295]: Skipping /boot Jan 20 01:34:24.063997 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:34:24.066458 systemd-tmpfiles[1295]: Skipping /boot Jan 20 01:34:24.069123 zram_generator::config[1321]: No configuration found. Jan 20 01:34:24.272190 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1328) Jan 20 01:34:24.350326 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:34:24.400363 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 20 01:34:24.430621 kernel: ACPI: button: Power Button [PWRF] Jan 20 01:34:24.443138 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 01:34:24.446554 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 01:34:24.448913 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 01:34:24.450051 systemd[1]: Reloading finished in 509 ms. Jan 20 01:34:24.478986 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:34:24.488765 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 20 01:34:24.528144 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 01:34:24.528632 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 20 01:34:24.537782 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 20 01:34:24.538136 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 01:34:24.585945 systemd[1]: Finished ensure-sysext.service. Jan 20 01:34:24.597694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:34:24.606446 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 01:34:24.612178 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 01:34:24.613728 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:34:24.623878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:34:24.627570 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:34:24.631778 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:34:24.635216 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:34:24.636712 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:34:24.638333 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 01:34:24.641559 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 01:34:24.651438 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:34:24.657435 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:34:24.671462 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 01:34:24.675402 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 01:34:24.684213 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:34:24.685029 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:34:24.723338 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 01:34:24.743158 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 01:34:24.749016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:34:24.750424 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:34:24.811352 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 01:34:24.838177 augenrules[1437]: No rules Jan 20 01:34:24.843770 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 01:34:24.846399 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:34:24.847673 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:34:24.852690 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:34:24.861412 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 20 01:34:24.862192 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:34:24.868632 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:34:24.870180 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:34:24.871824 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:34:24.876225 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 01:34:24.888348 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 01:34:24.890598 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 01:34:24.893394 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 01:34:24.940204 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 01:34:25.071869 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 01:34:25.076955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:34:25.124739 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 20 01:34:25.126793 systemd-networkd[1413]: lo: Link UP Jan 20 01:34:25.127261 systemd-networkd[1413]: lo: Gained carrier Jan 20 01:34:25.129775 systemd-networkd[1413]: Enumeration completed Jan 20 01:34:25.130826 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:34:25.130950 systemd-networkd[1413]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:34:25.136288 systemd-networkd[1413]: eth0: Link UP Jan 20 01:34:25.136398 systemd-networkd[1413]: eth0: Gained carrier Jan 20 01:34:25.136515 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:34:25.137387 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 20 01:34:25.138376 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:34:25.144257 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 01:34:25.160822 systemd-resolved[1417]: Positive Trust Anchors: Jan 20 01:34:25.160854 systemd-resolved[1417]: . 
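The "potentially unpredictable interface name" note above is networkd pointing out that zz-default.network matches eth0 by name only. A sketch of a pinned match, using a placeholder MAC address for this NIC:

    cat <<'EOF' >/etc/systemd/network/10-eth0.network
    [Match]
    # placeholder MAC; match on hardware address instead of the name
    MACAddress=52:54:00:00:00:00

    [Network]
    DHCP=yes
    EOF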
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:34:25.160901 systemd-resolved[1417]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:34:25.162250 systemd-networkd[1413]: eth0: DHCPv4 address 10.230.15.2/30, gateway 10.230.15.1 acquired from 10.230.15.1 Jan 20 01:34:25.165645 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 01:34:25.168178 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 01:34:25.167398 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 01:34:25.179181 systemd-resolved[1417]: Using system hostname 'srv-nmle2.gb1.brightbox.com'. Jan 20 01:34:25.182007 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:34:25.183085 systemd[1]: Reached target network.target - Network. Jan 20 01:34:25.183825 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:34:25.203566 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 20 01:34:25.204872 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:34:25.205681 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:34:25.206579 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 01:34:25.207605 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 01:34:25.214737 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 01:34:25.215718 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 01:34:25.216582 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 01:34:25.217387 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 01:34:25.217444 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:34:25.218117 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:34:25.221196 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 01:34:25.224041 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 01:34:25.230346 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 01:34:25.232877 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 20 01:34:25.234366 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 01:34:25.235233 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:34:25.235919 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:34:25.236643 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
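The DHCPv4 lease above is a /30, so the block 10.230.15.0-10.230.15.3 leaves exactly two usable addresses: .1 for the gateway and .2 for this host. The resulting state can be inspected with the stock tools:

    networkctl status eth0    # lease, gateway and DNS learned via DHCP
    resolvectl status         # the trust anchors and search domains listed above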
Jan 20 01:34:25.236695 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 01:34:25.243325 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 01:34:25.248349 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 01:34:25.252313 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 01:34:25.260381 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 01:34:25.263184 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 01:34:25.273892 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 01:34:25.275140 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 01:34:25.285436 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 01:34:25.287608 jq[1473]: false Jan 20 01:34:25.293230 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 01:34:25.298350 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 01:34:25.303360 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 01:34:25.317366 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 01:34:25.319650 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 01:34:25.320841 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 01:34:25.323648 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 01:34:25.333216 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 01:34:25.336032 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 20 01:34:25.340604 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 01:34:25.340854 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 01:34:25.351615 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 01:34:25.353209 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 01:34:25.355741 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 01:34:25.357205 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 20 01:34:25.369183 extend-filesystems[1474]: Found loop4 Jan 20 01:34:25.394110 extend-filesystems[1474]: Found loop5 Jan 20 01:34:25.394110 extend-filesystems[1474]: Found loop6 Jan 20 01:34:25.394110 extend-filesystems[1474]: Found loop7 Jan 20 01:34:25.394110 extend-filesystems[1474]: Found vda Jan 20 01:34:25.394110 extend-filesystems[1474]: Found vda1 Jan 20 01:34:25.394110 extend-filesystems[1474]: Found vda2 Jan 20 01:34:25.394110 extend-filesystems[1474]: Found vda3 Jan 20 01:34:25.394110 extend-filesystems[1474]: Found usr Jan 20 01:34:25.394110 extend-filesystems[1474]: Found vda4 Jan 20 01:34:25.394110 extend-filesystems[1474]: Found vda6 Jan 20 01:34:25.394110 extend-filesystems[1474]: Found vda7 Jan 20 01:34:25.394110 extend-filesystems[1474]: Found vda9 Jan 20 01:34:25.394110 extend-filesystems[1474]: Checking size of /dev/vda9 Jan 20 01:34:25.980206 systemd-timesyncd[1420]: Contacted time server 82.219.4.30:123 (0.flatcar.pool.ntp.org). Jan 20 01:34:25.994632 dbus-daemon[1472]: [system] SELinux support is enabled Jan 20 01:34:25.980290 systemd-timesyncd[1420]: Initial clock synchronization to Tue 2026-01-20 01:34:25.979742 UTC. Jan 20 01:34:25.981762 systemd-resolved[1417]: Clock change detected. Flushing caches. Jan 20 01:34:26.019406 jq[1489]: true Jan 20 01:34:25.997459 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 01:34:26.002422 (ntainerd)[1504]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 01:34:26.003696 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 01:34:26.003756 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 01:34:26.008290 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 01:34:26.008324 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 01:34:26.028280 dbus-daemon[1472]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1413 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 20 01:34:26.029839 dbus-daemon[1472]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 01:34:26.036420 tar[1494]: linux-amd64/LICENSE Jan 20 01:34:26.036420 tar[1494]: linux-amd64/helm Jan 20 01:34:26.036825 extend-filesystems[1474]: Resized partition /dev/vda9 Jan 20 01:34:26.042149 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 20 01:34:26.043975 update_engine[1486]: I20260120 01:34:26.043468 1486 main.cc:92] Flatcar Update Engine starting Jan 20 01:34:26.059012 extend-filesystems[1513]: resize2fs 1.47.1 (20-May-2024) Jan 20 01:34:26.064065 update_engine[1486]: I20260120 01:34:26.061905 1486 update_check_scheduler.cc:74] Next update check in 5m0s Jan 20 01:34:26.062038 systemd[1]: Started update-engine.service - Update Engine. Jan 20 01:34:26.071168 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
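update_engine schedules its first check five minutes out, and locksmithd (started just above with strategy "reboot") coordinates the reboots those updates request. The strategy is normally set in /etc/flatcar/update.conf; a plausible example, with values taken from the Flatcar documentation:

    GROUP=stable
    REBOOT_STRATEGY=etcd-lock    # alternatives include reboot and off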
Jan 20 01:34:26.074676 jq[1510]: true Jan 20 01:34:26.088571 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 20 01:34:26.088679 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1329) Jan 20 01:34:26.258661 systemd-logind[1481]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 01:34:26.259469 systemd-logind[1481]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 01:34:26.262195 systemd-logind[1481]: New seat seat0. Jan 20 01:34:26.268250 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 01:34:26.322832 bash[1531]: Updated "/home/core/.ssh/authorized_keys" Jan 20 01:34:26.324489 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 01:34:26.333310 systemd[1]: Starting sshkeys.service... Jan 20 01:34:26.399539 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 20 01:34:26.411500 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 20 01:34:26.420164 dbus-daemon[1472]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 20 01:34:26.421485 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 20 01:34:26.430110 dbus-daemon[1472]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1512 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 20 01:34:26.442813 systemd[1]: Starting polkit.service - Authorization Manager... Jan 20 01:34:26.507964 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 20 01:34:26.508976 polkitd[1541]: Started polkitd version 121 Jan 20 01:34:26.524589 polkitd[1541]: Loading rules from directory /etc/polkit-1/rules.d Jan 20 01:34:26.524702 polkitd[1541]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 20 01:34:26.528547 polkitd[1541]: Finished loading, compiling and executing 2 rules Jan 20 01:34:26.534274 extend-filesystems[1513]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 01:34:26.534274 extend-filesystems[1513]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 20 01:34:26.534274 extend-filesystems[1513]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 20 01:34:26.542483 extend-filesystems[1474]: Resized filesystem in /dev/vda9 Jan 20 01:34:26.537199 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 01:34:26.546123 dbus-daemon[1472]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 20 01:34:26.539130 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 01:34:26.549752 systemd[1]: Started polkit.service - Authorization Manager. Jan 20 01:34:26.553151 polkitd[1541]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 20 01:34:26.563969 containerd[1504]: time="2026-01-20T01:34:26.561929958Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 20 01:34:26.582387 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
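The resize above grows /dev/vda9 online from 1617920 to 15121403 4 KiB blocks, i.e. from roughly 6.2 GiB to about 57.7 GiB, while the filesystem stays mounted at /. Done by hand, the same step would be:

    resize2fs /dev/vda9    # ext4 grows online while mounted; shrinking requires unmounting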
Jan 20 01:34:26.585885 locksmithd[1514]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 01:34:26.594900 systemd-hostnamed[1512]: Hostname set to <srv-nmle2.gb1.brightbox.com> (static) Jan 20 01:34:26.624748 containerd[1504]: time="2026-01-20T01:34:26.624447150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:34:26.630596 containerd[1504]: time="2026-01-20T01:34:26.628897880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:34:26.630596 containerd[1504]: time="2026-01-20T01:34:26.628952514Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 20 01:34:26.630596 containerd[1504]: time="2026-01-20T01:34:26.628978256Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 20 01:34:26.630596 containerd[1504]: time="2026-01-20T01:34:26.629274941Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 20 01:34:26.630596 containerd[1504]: time="2026-01-20T01:34:26.629309515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 20 01:34:26.630596 containerd[1504]: time="2026-01-20T01:34:26.629423308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:34:26.630596 containerd[1504]: time="2026-01-20T01:34:26.629445575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:34:26.630596 containerd[1504]: time="2026-01-20T01:34:26.629668826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:34:26.630596 containerd[1504]: time="2026-01-20T01:34:26.629693173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 20 01:34:26.630596 containerd[1504]: time="2026-01-20T01:34:26.629713517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:34:26.630596 containerd[1504]: time="2026-01-20T01:34:26.629731220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 20 01:34:26.631038 containerd[1504]: time="2026-01-20T01:34:26.629854702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:34:26.633332 containerd[1504]: time="2026-01-20T01:34:26.633300894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:34:26.633480 containerd[1504]: time="2026-01-20T01:34:26.633450095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..."
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:34:26.633527 containerd[1504]: time="2026-01-20T01:34:26.633481975Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 20 01:34:26.633645 containerd[1504]: time="2026-01-20T01:34:26.633618342Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 01:34:26.633732 containerd[1504]: time="2026-01-20T01:34:26.633708394Z" level=info msg="metadata content store policy set" policy=shared Jan 20 01:34:26.640895 containerd[1504]: time="2026-01-20T01:34:26.640851837Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 01:34:26.641070 containerd[1504]: time="2026-01-20T01:34:26.641003221Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 20 01:34:26.643261 containerd[1504]: time="2026-01-20T01:34:26.642960492Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 20 01:34:26.643261 containerd[1504]: time="2026-01-20T01:34:26.643003203Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 20 01:34:26.643261 containerd[1504]: time="2026-01-20T01:34:26.643056948Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 20 01:34:26.643392 containerd[1504]: time="2026-01-20T01:34:26.643346320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 20 01:34:26.647411 containerd[1504]: time="2026-01-20T01:34:26.643825388Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 20 01:34:26.647411 containerd[1504]: time="2026-01-20T01:34:26.644050003Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 20 01:34:26.647411 containerd[1504]: time="2026-01-20T01:34:26.644089316Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 20 01:34:26.647411 containerd[1504]: time="2026-01-20T01:34:26.644114358Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 20 01:34:26.647411 containerd[1504]: time="2026-01-20T01:34:26.644137311Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 20 01:34:26.647411 containerd[1504]: time="2026-01-20T01:34:26.644158443Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 20 01:34:26.647411 containerd[1504]: time="2026-01-20T01:34:26.644178240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 20 01:34:26.647411 containerd[1504]: time="2026-01-20T01:34:26.644199360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 20 01:34:26.647411 containerd[1504]: time="2026-01-20T01:34:26.644221059Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 20 01:34:26.647411 containerd[1504]: time="2026-01-20T01:34:26.644242555Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 20 01:34:26.647411 containerd[1504]: time="2026-01-20T01:34:26.644262396Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 20 01:34:26.647411 containerd[1504]: time="2026-01-20T01:34:26.644279742Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 20 01:34:26.647411 containerd[1504]: time="2026-01-20T01:34:26.644324556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.647411 containerd[1504]: time="2026-01-20T01:34:26.644352023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644372584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644394391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644413783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644448800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644471243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644496873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644518895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644540982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644559039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644581923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644604582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644632034Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644669281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644693066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 20 01:34:26.647863 containerd[1504]: time="2026-01-20T01:34:26.644723959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 20 01:34:26.648396 containerd[1504]: time="2026-01-20T01:34:26.644796521Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 20 01:34:26.648396 containerd[1504]: time="2026-01-20T01:34:26.644831255Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 20 01:34:26.648396 containerd[1504]: time="2026-01-20T01:34:26.644851078Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 20 01:34:26.648396 containerd[1504]: time="2026-01-20T01:34:26.644869804Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 20 01:34:26.648396 containerd[1504]: time="2026-01-20T01:34:26.644885661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.648396 containerd[1504]: time="2026-01-20T01:34:26.644910911Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 20 01:34:26.648396 containerd[1504]: time="2026-01-20T01:34:26.644956389Z" level=info msg="NRI interface is disabled by configuration." Jan 20 01:34:26.648396 containerd[1504]: time="2026-01-20T01:34:26.644979017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 20 01:34:26.648647 containerd[1504]: time="2026-01-20T01:34:26.645375617Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 20 01:34:26.648647 containerd[1504]: time="2026-01-20T01:34:26.645456827Z" level=info msg="Connect containerd service" Jan 20 01:34:26.648647 containerd[1504]: time="2026-01-20T01:34:26.645508204Z" level=info msg="using legacy CRI server" Jan 20 01:34:26.648647 containerd[1504]: time="2026-01-20T01:34:26.645523946Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 01:34:26.648647 containerd[1504]: time="2026-01-20T01:34:26.645679417Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 20 01:34:26.650774 containerd[1504]: time="2026-01-20T01:34:26.650740304Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 01:34:26.653018 containerd[1504]: time="2026-01-20T01:34:26.652125849Z" level=info msg="Start subscribing containerd event" Jan 20 01:34:26.653018 containerd[1504]: time="2026-01-20T01:34:26.652205232Z" level=info msg="Start recovering state" Jan 20 01:34:26.653018 containerd[1504]: time="2026-01-20T01:34:26.652308349Z" level=info msg="Start event monitor" Jan 20 01:34:26.653018 containerd[1504]: time="2026-01-20T01:34:26.652341802Z" level=info msg="Start snapshots syncer" Jan 20 01:34:26.653018 containerd[1504]: time="2026-01-20T01:34:26.652362484Z" level=info msg="Start cni network conf syncer for default" Jan 20 01:34:26.653018 containerd[1504]: time="2026-01-20T01:34:26.652375568Z" level=info msg="Start streaming server" Jan 20 01:34:26.655962 containerd[1504]: time="2026-01-20T01:34:26.655483048Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 01:34:26.655962 containerd[1504]: time="2026-01-20T01:34:26.655569601Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 01:34:26.655962 containerd[1504]: time="2026-01-20T01:34:26.655673949Z" level=info msg="containerd successfully booted in 0.096447s" Jan 20 01:34:26.655786 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 01:34:26.671610 sshd_keygen[1499]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 01:34:26.706845 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 01:34:26.719854 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 01:34:26.724370 systemd[1]: Started sshd@0-10.230.15.2:22-20.161.92.111:47398.service - OpenSSH per-connection server daemon (20.161.92.111:47398). Jan 20 01:34:26.737794 systemd[1]: issuegen.service: Deactivated successfully. 
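The CRI config dump above shows runc driven through io.containerd.runc.v2 with SystemdCgroup:true. Expressed as a containerd 1.7 config.toml fragment, that setting would look like this (a sketch of the relevant keys, not the file Flatcar actually ships):

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true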
Jan 20 01:34:26.738447 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 01:34:26.749001 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 01:34:26.772757 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 01:34:26.782464 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 01:34:26.791451 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 01:34:26.794373 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 01:34:27.100047 tar[1494]: linux-amd64/README.md Jan 20 01:34:27.113420 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 01:34:27.325107 sshd[1571]: Accepted publickey for core from 20.161.92.111 port 47398 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:34:27.327248 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:34:27.342608 systemd-logind[1481]: New session 1 of user core. Jan 20 01:34:27.345158 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 01:34:27.356447 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 01:34:27.390338 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 01:34:27.399421 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 01:34:27.417001 (systemd)[1586]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 01:34:27.560417 systemd[1586]: Queued start job for default target default.target. Jan 20 01:34:27.570675 systemd[1586]: Created slice app.slice - User Application Slice. Jan 20 01:34:27.570721 systemd[1586]: Reached target paths.target - Paths. Jan 20 01:34:27.570745 systemd[1586]: Reached target timers.target - Timers. Jan 20 01:34:27.572783 systemd[1586]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 01:34:27.596800 systemd[1586]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 01:34:27.597011 systemd[1586]: Reached target sockets.target - Sockets. Jan 20 01:34:27.597037 systemd[1586]: Reached target basic.target - Basic System. Jan 20 01:34:27.597129 systemd[1586]: Reached target default.target - Main User Target. Jan 20 01:34:27.597211 systemd[1586]: Startup finished in 170ms. Jan 20 01:34:27.597549 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 01:34:27.607238 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 01:34:27.638251 systemd-networkd[1413]: eth0: Gained IPv6LL Jan 20 01:34:27.642520 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 01:34:27.645114 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 01:34:27.652283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:34:27.657395 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 01:34:27.693618 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 01:34:28.028533 systemd[1]: Started sshd@1-10.230.15.2:22-20.161.92.111:47406.service - OpenSSH per-connection server daemon (20.161.92.111:47406). 
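systemd-networkd-wait-online finishing is what flips network-online.target, which kubelet and the metadata units order themselves after. For any service that must not start before the network is configured, the standard pattern is a unit drop-in (the unit name here is hypothetical):

    # systemctl edit myservice.service
    [Unit]
    Wants=network-online.target
    After=network-online.target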
Jan 20 01:34:28.594453 sshd[1609]: Accepted publickey for core from 20.161.92.111 port 47406 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:34:28.597261 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:34:28.606342 systemd-logind[1481]: New session 2 of user core. Jan 20 01:34:28.621333 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 01:34:28.646192 systemd-networkd[1413]: eth0: Ignoring DHCPv6 address 2a02:1348:179:83c0:24:19ff:fee6:f02/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:83c0:24:19ff:fee6:f02/64 assigned by NDisc. Jan 20 01:34:28.646208 systemd-networkd[1413]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 20 01:34:28.785759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:34:28.800592 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:34:29.005567 sshd[1609]: pam_unix(sshd:session): session closed for user core Jan 20 01:34:29.017945 systemd[1]: sshd@1-10.230.15.2:22-20.161.92.111:47406.service: Deactivated successfully. Jan 20 01:34:29.023334 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 01:34:29.025490 systemd-logind[1481]: Session 2 logged out. Waiting for processes to exit. Jan 20 01:34:29.028036 systemd-logind[1481]: Removed session 2. Jan 20 01:34:29.109537 systemd[1]: Started sshd@2-10.230.15.2:22-20.161.92.111:47416.service - OpenSSH per-connection server daemon (20.161.92.111:47416). Jan 20 01:34:29.472923 kubelet[1618]: E0120 01:34:29.472776 1618 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:34:29.475356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:34:29.475592 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:34:29.476402 systemd[1]: kubelet.service: Consumed 1.048s CPU time. Jan 20 01:34:29.679558 sshd[1628]: Accepted publickey for core from 20.161.92.111 port 47416 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:34:29.682233 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:34:29.691014 systemd-logind[1481]: New session 3 of user core. Jan 20 01:34:29.698290 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 01:34:30.088429 sshd[1628]: pam_unix(sshd:session): session closed for user core Jan 20 01:34:30.092474 systemd[1]: sshd@2-10.230.15.2:22-20.161.92.111:47416.service: Deactivated successfully. Jan 20 01:34:30.094826 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 01:34:30.096903 systemd-logind[1481]: Session 3 logged out. Waiting for processes to exit. Jan 20 01:34:30.098864 systemd-logind[1481]: Removed session 3. Jan 20 01:34:31.850271 login[1579]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 20 01:34:31.856466 login[1578]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 20 01:34:31.860866 systemd-logind[1481]: New session 4 of user core. 
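The DHCPv6/NDisc conflict above is harmless but noisy: the same address arrives both as a /128 lease and as part of a /64 SLAAC prefix. Following the second option in the log's own hint, one way to quiet it is to stop autonomous address configuration in the matching .network file:

    [IPv6AcceptRA]
    UseAutonomousPrefix=no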
Jan 20 01:34:31.869545 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 01:34:31.873733 systemd-logind[1481]: New session 5 of user core. Jan 20 01:34:31.884375 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 01:34:31.974438 systemd[1]: Started sshd@3-10.230.15.2:22-134.209.94.87:54450.service - OpenSSH per-connection server daemon (134.209.94.87:54450). Jan 20 01:34:32.140294 sshd[1664]: Connection closed by authenticating user root 134.209.94.87 port 54450 [preauth] Jan 20 01:34:32.143571 systemd[1]: sshd@3-10.230.15.2:22-134.209.94.87:54450.service: Deactivated successfully. Jan 20 01:34:32.947399 coreos-metadata[1471]: Jan 20 01:34:32.947 WARN failed to locate config-drive, using the metadata service API instead Jan 20 01:34:33.071125 coreos-metadata[1471]: Jan 20 01:34:33.071 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 20 01:34:33.078081 coreos-metadata[1471]: Jan 20 01:34:33.078 INFO Fetch failed with 404: resource not found Jan 20 01:34:33.078081 coreos-metadata[1471]: Jan 20 01:34:33.078 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 20 01:34:33.078556 coreos-metadata[1471]: Jan 20 01:34:33.078 INFO Fetch successful Jan 20 01:34:33.078665 coreos-metadata[1471]: Jan 20 01:34:33.078 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 20 01:34:33.093474 coreos-metadata[1471]: Jan 20 01:34:33.093 INFO Fetch successful Jan 20 01:34:33.093474 coreos-metadata[1471]: Jan 20 01:34:33.093 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 20 01:34:33.109498 coreos-metadata[1471]: Jan 20 01:34:33.109 INFO Fetch successful Jan 20 01:34:33.109498 coreos-metadata[1471]: Jan 20 01:34:33.109 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 20 01:34:33.126153 coreos-metadata[1471]: Jan 20 01:34:33.126 INFO Fetch successful Jan 20 01:34:33.126153 coreos-metadata[1471]: Jan 20 01:34:33.126 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 20 01:34:33.147755 coreos-metadata[1471]: Jan 20 01:34:33.147 INFO Fetch successful Jan 20 01:34:33.175250 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 01:34:33.176483 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 01:34:33.562455 coreos-metadata[1538]: Jan 20 01:34:33.562 WARN failed to locate config-drive, using the metadata service API instead Jan 20 01:34:33.585606 coreos-metadata[1538]: Jan 20 01:34:33.585 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 20 01:34:33.609555 coreos-metadata[1538]: Jan 20 01:34:33.609 INFO Fetch successful Jan 20 01:34:33.609715 coreos-metadata[1538]: Jan 20 01:34:33.609 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 20 01:34:33.638376 coreos-metadata[1538]: Jan 20 01:34:33.638 INFO Fetch successful Jan 20 01:34:33.645515 unknown[1538]: wrote ssh authorized keys file for user: core Jan 20 01:34:33.693035 update-ssh-keys[1678]: Updated "/home/core/.ssh/authorized_keys" Jan 20 01:34:33.693776 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 20 01:34:33.696304 systemd[1]: Finished sshkeys.service. Jan 20 01:34:33.699243 systemd[1]: Reached target multi-user.target - Multi-User System. 
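coreos-metadata falls back from the (absent) config drive to the OpenStack/EC2-style metadata service, and the endpoints it fetched can be queried directly from the instance, for example:

    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/public-ipv4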
Jan 20 01:34:33.701032 systemd[1]: Startup finished in 1.523s (kernel) + 14.543s (initrd) + 11.865s (userspace) = 27.932s. Jan 20 01:34:39.640009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 01:34:39.648212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:34:39.845221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:34:39.859413 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:34:39.956813 kubelet[1689]: E0120 01:34:39.956550 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:34:39.960670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:34:39.960929 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:34:40.192302 systemd[1]: Started sshd@4-10.230.15.2:22-20.161.92.111:49162.service - OpenSSH per-connection server daemon (20.161.92.111:49162). Jan 20 01:34:40.764627 sshd[1697]: Accepted publickey for core from 20.161.92.111 port 49162 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:34:40.766902 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:34:40.773847 systemd-logind[1481]: New session 6 of user core. Jan 20 01:34:40.782165 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 01:34:41.169564 sshd[1697]: pam_unix(sshd:session): session closed for user core Jan 20 01:34:41.174644 systemd[1]: sshd@4-10.230.15.2:22-20.161.92.111:49162.service: Deactivated successfully. Jan 20 01:34:41.176786 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 01:34:41.177735 systemd-logind[1481]: Session 6 logged out. Waiting for processes to exit. Jan 20 01:34:41.179287 systemd-logind[1481]: Removed session 6. Jan 20 01:34:41.273361 systemd[1]: Started sshd@5-10.230.15.2:22-20.161.92.111:49176.service - OpenSSH per-connection server daemon (20.161.92.111:49176). Jan 20 01:34:41.833523 sshd[1704]: Accepted publickey for core from 20.161.92.111 port 49176 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:34:41.835582 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:34:41.841812 systemd-logind[1481]: New session 7 of user core. Jan 20 01:34:41.850149 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 01:34:42.231335 sshd[1704]: pam_unix(sshd:session): session closed for user core Jan 20 01:34:42.236338 systemd[1]: sshd@5-10.230.15.2:22-20.161.92.111:49176.service: Deactivated successfully. Jan 20 01:34:42.238806 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 01:34:42.240136 systemd-logind[1481]: Session 7 logged out. Waiting for processes to exit. Jan 20 01:34:42.241615 systemd-logind[1481]: Removed session 7. Jan 20 01:34:42.335324 systemd[1]: Started sshd@6-10.230.15.2:22-20.161.92.111:49178.service - OpenSSH per-connection server daemon (20.161.92.111:49178). 
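kubelet keeps failing because /var/lib/kubelet/config.yaml does not exist yet; kubeadm writes that file during kubeadm init or kubeadm join, neither of which has run on this node. For orientation only, a minimal hand-written KubeletConfiguration would start like:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches the SystemdCgroup=true runc option above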
Jan 20 01:34:42.905659 sshd[1711]: Accepted publickey for core from 20.161.92.111 port 49178 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:34:42.907866 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:34:42.914544 systemd-logind[1481]: New session 8 of user core. Jan 20 01:34:42.925568 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 01:34:43.311242 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 20 01:34:43.315898 systemd-logind[1481]: Session 8 logged out. Waiting for processes to exit. Jan 20 01:34:43.316331 systemd[1]: sshd@6-10.230.15.2:22-20.161.92.111:49178.service: Deactivated successfully. Jan 20 01:34:43.318711 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 01:34:43.320665 systemd-logind[1481]: Removed session 8. Jan 20 01:34:43.420306 systemd[1]: Started sshd@7-10.230.15.2:22-20.161.92.111:34278.service - OpenSSH per-connection server daemon (20.161.92.111:34278). Jan 20 01:34:43.995985 sshd[1718]: Accepted publickey for core from 20.161.92.111 port 34278 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:34:43.998331 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:34:44.007042 systemd-logind[1481]: New session 9 of user core. Jan 20 01:34:44.017207 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 01:34:44.323189 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 01:34:44.323695 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:34:44.343592 sudo[1721]: pam_unix(sudo:session): session closed for user root Jan 20 01:34:44.434880 sshd[1718]: pam_unix(sshd:session): session closed for user core Jan 20 01:34:44.440019 systemd[1]: sshd@7-10.230.15.2:22-20.161.92.111:34278.service: Deactivated successfully. Jan 20 01:34:44.442337 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 01:34:44.444432 systemd-logind[1481]: Session 9 logged out. Waiting for processes to exit. Jan 20 01:34:44.446006 systemd-logind[1481]: Removed session 9. Jan 20 01:34:44.537297 systemd[1]: Started sshd@8-10.230.15.2:22-20.161.92.111:34294.service - OpenSSH per-connection server daemon (20.161.92.111:34294). Jan 20 01:34:45.110871 sshd[1726]: Accepted publickey for core from 20.161.92.111 port 34294 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:34:45.113682 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:34:45.121717 systemd-logind[1481]: New session 10 of user core. Jan 20 01:34:45.131251 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 01:34:45.428343 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 01:34:45.428859 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:34:45.434786 sudo[1730]: pam_unix(sudo:session): session closed for user root Jan 20 01:34:45.443025 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 20 01:34:45.443454 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:34:45.467400 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 20 01:34:45.470301 auditctl[1733]: No rules Jan 20 01:34:45.472554 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 01:34:45.473150 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 20 01:34:45.480508 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 01:34:45.539158 augenrules[1751]: No rules Jan 20 01:34:45.540222 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 01:34:45.541679 sudo[1729]: pam_unix(sudo:session): session closed for user root Jan 20 01:34:45.633406 sshd[1726]: pam_unix(sshd:session): session closed for user core Jan 20 01:34:45.637820 systemd-logind[1481]: Session 10 logged out. Waiting for processes to exit. Jan 20 01:34:45.639248 systemd[1]: sshd@8-10.230.15.2:22-20.161.92.111:34294.service: Deactivated successfully. Jan 20 01:34:45.641573 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 01:34:45.643653 systemd-logind[1481]: Removed session 10. Jan 20 01:34:45.746358 systemd[1]: Started sshd@9-10.230.15.2:22-20.161.92.111:34310.service - OpenSSH per-connection server daemon (20.161.92.111:34310). Jan 20 01:34:46.309251 sshd[1759]: Accepted publickey for core from 20.161.92.111 port 34310 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:34:46.311670 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:34:46.319016 systemd-logind[1481]: New session 11 of user core. Jan 20 01:34:46.329272 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 01:34:46.626395 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 01:34:46.626886 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:34:47.134369 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 01:34:47.136756 (dockerd)[1778]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 01:34:47.629240 dockerd[1778]: time="2026-01-20T01:34:47.628433242Z" level=info msg="Starting up" Jan 20 01:34:47.774151 systemd[1]: var-lib-docker-metacopy\x2dcheck2874320265-merged.mount: Deactivated successfully. Jan 20 01:34:47.796115 dockerd[1778]: time="2026-01-20T01:34:47.796020007Z" level=info msg="Loading containers: start." Jan 20 01:34:47.956083 kernel: Initializing XFRM netlink socket Jan 20 01:34:48.073698 systemd-networkd[1413]: docker0: Link UP Jan 20 01:34:48.093137 dockerd[1778]: time="2026-01-20T01:34:48.093029747Z" level=info msg="Loading containers: done." Jan 20 01:34:48.114550 dockerd[1778]: time="2026-01-20T01:34:48.114447052Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 01:34:48.114803 dockerd[1778]: time="2026-01-20T01:34:48.114681332Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 20 01:34:48.114977 dockerd[1778]: time="2026-01-20T01:34:48.114914482Z" level=info msg="Daemon has completed initialization" Jan 20 01:34:48.155925 dockerd[1778]: time="2026-01-20T01:34:48.155809682Z" level=info msg="API listen on /run/docker.sock" Jan 20 01:34:48.156624 systemd[1]: Started docker.service - Docker Application Container Engine. 
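dockerd comes up on overlay2 but warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR, which mainly affects image-build performance. Both facts are easy to confirm on the host:

    docker info --format '{{.Driver}}'                      # expect: overlay2
    zgrep CONFIG_OVERLAY_FS_REDIRECT_DIR /proc/config.gz    # available when IKCONFIG_PROC is built in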
Jan 20 01:34:49.512927 containerd[1504]: time="2026-01-20T01:34:49.512769129Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 20 01:34:50.140094 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 01:34:50.150257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:34:50.351097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:34:50.361416 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:34:50.436101 kubelet[1930]: E0120 01:34:50.434301 1930 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:34:50.442770 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:34:50.444246 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:34:50.478670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3167467558.mount: Deactivated successfully. Jan 20 01:34:52.435400 containerd[1504]: time="2026-01-20T01:34:52.435171067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:52.438156 containerd[1504]: time="2026-01-20T01:34:52.438066266Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068081" Jan 20 01:34:52.439146 containerd[1504]: time="2026-01-20T01:34:52.439094532Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:52.445992 containerd[1504]: time="2026-01-20T01:34:52.444276725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:52.448871 containerd[1504]: time="2026-01-20T01:34:52.448804917Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.935891394s" Jan 20 01:34:52.449062 containerd[1504]: time="2026-01-20T01:34:52.449031465Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 20 01:34:52.451861 containerd[1504]: time="2026-01-20T01:34:52.451778347Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 20 01:34:54.664631 containerd[1504]: time="2026-01-20T01:34:54.662842389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:54.664631 containerd[1504]: time="2026-01-20T01:34:54.664521672Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes 
read=21162448" Jan 20 01:34:54.666394 containerd[1504]: time="2026-01-20T01:34:54.666356867Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:54.672130 containerd[1504]: time="2026-01-20T01:34:54.672082566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:54.674112 containerd[1504]: time="2026-01-20T01:34:54.674067719Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 2.22188056s" Jan 20 01:34:54.674337 containerd[1504]: time="2026-01-20T01:34:54.674287147Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 20 01:34:54.676323 containerd[1504]: time="2026-01-20T01:34:54.676273619Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 20 01:34:56.192991 containerd[1504]: time="2026-01-20T01:34:56.191339581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:56.194224 containerd[1504]: time="2026-01-20T01:34:56.194149842Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725935" Jan 20 01:34:56.194905 containerd[1504]: time="2026-01-20T01:34:56.194376102Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:56.202956 containerd[1504]: time="2026-01-20T01:34:56.202210548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:56.203437 containerd[1504]: time="2026-01-20T01:34:56.203398549Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.527062706s" Jan 20 01:34:56.203596 containerd[1504]: time="2026-01-20T01:34:56.203566874Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 20 01:34:56.204525 containerd[1504]: time="2026-01-20T01:34:56.204463901Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 20 01:34:57.654082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2477154375.mount: Deactivated successfully. 
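Each "Pulled image" message pairs an image size with the wall time of the pull, so a rough effective transfer rate falls out directly: 27064672 bytes in 2.935891394s is about 9.2 MB/s. A small sketch computing the same figure for the three pulls completed so far (numbers copied from the log above; "MB" here means 10^6 bytes):

```go
package main

import "fmt"

func main() {
	// Size/duration pairs copied from the "Pulled image" messages above.
	pulls := []struct {
		name    string
		bytes   float64
		seconds float64
	}{
		{"kube-apiserver:v1.34.3", 27064672, 2.935891394},
		{"kube-controller-manager:v1.34.3", 22819474, 2.22188056},
		{"kube-scheduler:v1.34.3", 17382979, 1.527062706},
	}
	for _, p := range pulls {
		fmt.Printf("%-34s %5.1f MB/s\n", p.name, p.bytes/p.seconds/1e6)
	}
}
```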
Jan 20 01:34:58.169555 containerd[1504]: time="2026-01-20T01:34:58.169482472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:58.170783 containerd[1504]: time="2026-01-20T01:34:58.170623900Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965301" Jan 20 01:34:58.172148 containerd[1504]: time="2026-01-20T01:34:58.171630512Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:58.174782 containerd[1504]: time="2026-01-20T01:34:58.174735410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:34:58.175773 containerd[1504]: time="2026-01-20T01:34:58.175710172Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.971065302s" Jan 20 01:34:58.175916 containerd[1504]: time="2026-01-20T01:34:58.175886236Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 20 01:34:58.177694 containerd[1504]: time="2026-01-20T01:34:58.177653347Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 20 01:34:58.689977 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 20 01:34:58.742481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1162557515.mount: Deactivated successfully. Jan 20 01:35:00.054729 systemd[1]: Started sshd@10-10.230.15.2:22-134.209.94.87:55158.service - OpenSSH per-connection server daemon (134.209.94.87:55158). Jan 20 01:35:00.184518 sshd[2068]: Connection closed by authenticating user root 134.209.94.87 port 55158 [preauth] Jan 20 01:35:00.189119 systemd[1]: sshd@10-10.230.15.2:22-134.209.94.87:55158.service: Deactivated successfully. Jan 20 01:35:00.640402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 01:35:00.659176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:35:00.971616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:35:00.976541 containerd[1504]: time="2026-01-20T01:35:00.975864400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:00.978405 containerd[1504]: time="2026-01-20T01:35:00.977965521Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Jan 20 01:35:00.982091 containerd[1504]: time="2026-01-20T01:35:00.981072940Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:00.981521 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:35:00.986173 containerd[1504]: time="2026-01-20T01:35:00.986005459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:00.990880 containerd[1504]: time="2026-01-20T01:35:00.989094108Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.811384492s" Jan 20 01:35:00.990880 containerd[1504]: time="2026-01-20T01:35:00.989155378Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 20 01:35:00.992699 containerd[1504]: time="2026-01-20T01:35:00.992328766Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 20 01:35:01.061300 kubelet[2080]: E0120 01:35:01.061211 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:35:01.065262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:35:01.065532 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:35:01.515714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3564930359.mount: Deactivated successfully. 
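Both kubelet starts so far have died the same way: /var/lib/kubelet/config.yaml does not exist yet, so the process exits with status 1 and systemd schedules the next restart. That file is normally written by `kubeadm init`/`kubeadm join`, so the loop continuing until bootstrap completes is expected behavior (the usual explanation; the log itself doesn't show kubeadm running). The failing pre-flight condition is just a file-existence check, as this sketch illustrates:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// The path the failing kubelet is trying to load.
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); os.IsNotExist(err) {
		// This is the state the log shows: restart until kubeadm writes it.
		fmt.Println("kubelet config not written yet:", path)
		return
	}
	fmt.Println("kubelet config present:", path)
}
```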
Jan 20 01:35:01.525820 containerd[1504]: time="2026-01-20T01:35:01.525714678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:01.528274 containerd[1504]: time="2026-01-20T01:35:01.528145628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Jan 20 01:35:01.531716 containerd[1504]: time="2026-01-20T01:35:01.530036859Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:01.535448 containerd[1504]: time="2026-01-20T01:35:01.534309169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:01.535448 containerd[1504]: time="2026-01-20T01:35:01.535112661Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 542.736639ms" Jan 20 01:35:01.535448 containerd[1504]: time="2026-01-20T01:35:01.535179506Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 20 01:35:01.536856 containerd[1504]: time="2026-01-20T01:35:01.536794339Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 20 01:35:02.142339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2993691546.mount: Deactivated successfully. 
Jan 20 01:35:06.195795 containerd[1504]: time="2026-01-20T01:35:06.195629407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:06.198927 containerd[1504]: time="2026-01-20T01:35:06.198456775Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166822" Jan 20 01:35:06.198927 containerd[1504]: time="2026-01-20T01:35:06.198844856Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:06.203997 containerd[1504]: time="2026-01-20T01:35:06.203330941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:06.205965 containerd[1504]: time="2026-01-20T01:35:06.205238648Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.668380205s" Jan 20 01:35:06.205965 containerd[1504]: time="2026-01-20T01:35:06.205401315Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 20 01:35:06.530739 systemd[1]: Started sshd@11-10.230.15.2:22-152.42.141.173:42122.service - OpenSSH per-connection server daemon (152.42.141.173:42122). Jan 20 01:35:07.977712 sshd[2152]: Connection closed by authenticating user root 152.42.141.173 port 42122 [preauth] Jan 20 01:35:07.981558 systemd[1]: sshd@11-10.230.15.2:22-152.42.141.173:42122.service: Deactivated successfully. Jan 20 01:35:10.374837 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:35:10.386414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:35:10.430058 systemd[1]: Reloading requested from client PID 2176 ('systemctl') (unit session-11.scope)... Jan 20 01:35:10.430101 systemd[1]: Reloading... Jan 20 01:35:10.609991 zram_generator::config[2214]: No configuration found. Jan 20 01:35:10.771526 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:35:10.879771 systemd[1]: Reloading finished in 449 ms. Jan 20 01:35:10.954426 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 01:35:10.954580 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 01:35:10.954986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:35:10.962421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:35:11.116030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:35:11.130795 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:35:11.262126 kubelet[2282]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
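The reload above also surfaces a unit-file nit: docker.socket still says `ListenStream=/var/run/docker.sock`, and systemd rewrites it to /run/docker.sock because /var/run is only a compatibility symlink into /run on modern systemd systems. A quick way to confirm that on a given host (a sketch; the exact link target varies by distro):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// On systemd distros /var/run is a symlink into /run, which is why
	// the unit's ListenStream path gets rewritten during the reload above.
	target, err := os.Readlink("/var/run")
	if err != nil {
		fmt.Println("/var/run is not a symlink here:", err)
		return
	}
	fmt.Printf("/var/run -> %s\n", target) // typically "../run" or "/run"
}
```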
Jan 20 01:35:11.263960 kubelet[2282]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:35:11.263960 kubelet[2282]: I0120 01:35:11.262852 2282 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:35:11.784979 update_engine[1486]: I20260120 01:35:11.783118 1486 update_attempter.cc:509] Updating boot flags... Jan 20 01:35:11.847018 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2295) Jan 20 01:35:11.961971 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2297) Jan 20 01:35:12.264532 kubelet[2282]: I0120 01:35:12.264463 2282 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 01:35:12.265420 kubelet[2282]: I0120 01:35:12.265397 2282 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:35:12.269111 kubelet[2282]: I0120 01:35:12.269085 2282 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 01:35:12.270018 kubelet[2282]: I0120 01:35:12.269229 2282 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:35:12.270018 kubelet[2282]: I0120 01:35:12.269667 2282 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 01:35:12.292209 kubelet[2282]: I0120 01:35:12.292157 2282 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:35:12.298061 kubelet[2282]: E0120 01:35:12.297971 2282 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.15.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.15.2:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:35:12.307539 kubelet[2282]: E0120 01:35:12.307441 2282 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 01:35:12.307676 kubelet[2282]: I0120 01:35:12.307571 2282 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 20 01:35:12.316504 kubelet[2282]: I0120 01:35:12.316474 2282 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 20 01:35:12.320457 kubelet[2282]: I0120 01:35:12.320392 2282 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:35:12.322041 kubelet[2282]: I0120 01:35:12.320449 2282 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-nmle2.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:35:12.322041 kubelet[2282]: I0120 01:35:12.322036 2282 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:35:12.322404 kubelet[2282]: I0120 01:35:12.322056 2282 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 01:35:12.322404 kubelet[2282]: I0120 01:35:12.322257 2282 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 01:35:12.329058 kubelet[2282]: I0120 01:35:12.329030 2282 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:35:12.330767 kubelet[2282]: I0120 01:35:12.330721 2282 kubelet.go:475] "Attempting to sync node with API server" Jan 20 01:35:12.330767 kubelet[2282]: I0120 01:35:12.330753 2282 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:35:12.331093 kubelet[2282]: I0120 01:35:12.330804 2282 kubelet.go:387] "Adding apiserver pod source" Jan 20 01:35:12.333588 kubelet[2282]: I0120 01:35:12.332982 2282 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:35:12.335622 kubelet[2282]: E0120 01:35:12.335581 2282 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.15.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-nmle2.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.15.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:35:12.336229 kubelet[2282]: E0120 01:35:12.335881 2282 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.230.15.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.15.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:35:12.336734 kubelet[2282]: I0120 01:35:12.336707 2282 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 01:35:12.338952 kubelet[2282]: I0120 01:35:12.338903 2282 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 01:35:12.339107 kubelet[2282]: I0120 01:35:12.339086 2282 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 01:35:12.343612 kubelet[2282]: W0120 01:35:12.343585 2282 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 01:35:12.350583 kubelet[2282]: I0120 01:35:12.350556 2282 server.go:1262] "Started kubelet" Jan 20 01:35:12.352067 kubelet[2282]: I0120 01:35:12.352044 2282 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:35:12.361224 kubelet[2282]: E0120 01:35:12.359279 2282 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.15.2:6443/api/v1/namespaces/default/events\": dial tcp 10.230.15.2:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-nmle2.gb1.brightbox.com.188c4c8ab9764c81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-nmle2.gb1.brightbox.com,UID:srv-nmle2.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-nmle2.gb1.brightbox.com,},FirstTimestamp:2026-01-20 01:35:12.350497921 +0000 UTC m=+1.211825694,LastTimestamp:2026-01-20 01:35:12.350497921 +0000 UTC m=+1.211825694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-nmle2.gb1.brightbox.com,}" Jan 20 01:35:12.365706 kubelet[2282]: I0120 01:35:12.365632 2282 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:35:12.369656 kubelet[2282]: I0120 01:35:12.369626 2282 server.go:310] "Adding debug handlers to kubelet server" Jan 20 01:35:12.379116 kubelet[2282]: I0120 01:35:12.379074 2282 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:35:12.381186 kubelet[2282]: I0120 01:35:12.381135 2282 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:35:12.381271 kubelet[2282]: I0120 01:35:12.381229 2282 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 01:35:12.381614 kubelet[2282]: I0120 01:35:12.381586 2282 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:35:12.385183 kubelet[2282]: I0120 01:35:12.382702 2282 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 01:35:12.385183 kubelet[2282]: E0120 01:35:12.383051 2282 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-nmle2.gb1.brightbox.com\" not found" Jan 20 
01:35:12.388110 kubelet[2282]: I0120 01:35:12.388069 2282 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 01:35:12.388195 kubelet[2282]: I0120 01:35:12.388159 2282 reconciler.go:29] "Reconciler: start to sync state" Jan 20 01:35:12.389684 kubelet[2282]: E0120 01:35:12.389164 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-nmle2.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.2:6443: connect: connection refused" interval="200ms" Jan 20 01:35:12.390340 kubelet[2282]: E0120 01:35:12.390278 2282 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.15.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.15.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:35:12.390991 kubelet[2282]: I0120 01:35:12.390821 2282 factory.go:223] Registration of the systemd container factory successfully Jan 20 01:35:12.391133 kubelet[2282]: I0120 01:35:12.391105 2282 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:35:12.393218 kubelet[2282]: I0120 01:35:12.393042 2282 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 20 01:35:12.393985 kubelet[2282]: I0120 01:35:12.393964 2282 factory.go:223] Registration of the containerd container factory successfully Jan 20 01:35:12.394545 kubelet[2282]: I0120 01:35:12.394517 2282 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 20 01:35:12.394629 kubelet[2282]: I0120 01:35:12.394554 2282 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 01:35:12.394629 kubelet[2282]: I0120 01:35:12.394599 2282 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 01:35:12.394748 kubelet[2282]: E0120 01:35:12.394696 2282 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:35:12.404865 kubelet[2282]: E0120 01:35:12.404783 2282 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.15.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.15.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:35:12.420958 kubelet[2282]: E0120 01:35:12.418703 2282 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:35:12.433242 kubelet[2282]: I0120 01:35:12.433212 2282 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:35:12.433830 kubelet[2282]: I0120 01:35:12.433808 2282 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:35:12.434003 kubelet[2282]: I0120 01:35:12.433978 2282 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:35:12.437024 kubelet[2282]: I0120 01:35:12.437000 2282 policy_none.go:49] "None policy: Start" Jan 20 01:35:12.437176 kubelet[2282]: I0120 01:35:12.437153 2282 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 01:35:12.437301 kubelet[2282]: I0120 01:35:12.437281 2282 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 01:35:12.438656 kubelet[2282]: I0120 01:35:12.438635 2282 policy_none.go:47] "Start" Jan 20 01:35:12.446105 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 01:35:12.462260 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 01:35:12.467440 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 01:35:12.477513 kubelet[2282]: E0120 01:35:12.477219 2282 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:35:12.478675 kubelet[2282]: I0120 01:35:12.478156 2282 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:35:12.478675 kubelet[2282]: I0120 01:35:12.478186 2282 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:35:12.478675 kubelet[2282]: I0120 01:35:12.478591 2282 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:35:12.481484 kubelet[2282]: E0120 01:35:12.481370 2282 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 01:35:12.481484 kubelet[2282]: E0120 01:35:12.481443 2282 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-nmle2.gb1.brightbox.com\" not found" Jan 20 01:35:12.516480 systemd[1]: Created slice kubepods-burstable-pod4aed5d3eccdf848b10d5dd043a925a96.slice - libcontainer container kubepods-burstable-pod4aed5d3eccdf848b10d5dd043a925a96.slice. Jan 20 01:35:12.537625 kubelet[2282]: E0120 01:35:12.537560 2282 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nmle2.gb1.brightbox.com\" not found" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.543066 systemd[1]: Created slice kubepods-burstable-pod674c2c1601b0d831409fbc9c81a379c0.slice - libcontainer container kubepods-burstable-pod674c2c1601b0d831409fbc9c81a379c0.slice. Jan 20 01:35:12.554767 kubelet[2282]: E0120 01:35:12.554719 2282 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nmle2.gb1.brightbox.com\" not found" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.560790 systemd[1]: Created slice kubepods-burstable-pod4a08b046aca95c19058fdea864e8531b.slice - libcontainer container kubepods-burstable-pod4a08b046aca95c19058fdea864e8531b.slice. 
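The kubepods slices systemd just created are the root of the kubelet's pod cgroup tree: one child slice per QoS class (burstable and besteffort; guaranteed pods sit directly under kubepods.slice), matching the "CgroupDriver":"systemd" and "CgroupVersion":2 settings dumped earlier. Under cgroup v2 they appear as directories, which a sketch like this can verify:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// The slice hierarchy the log shows being created (cgroup v2 layout
	// assumed, matching "CgroupVersion":2 in the node config above).
	for _, p := range []string{
		"/sys/fs/cgroup/kubepods.slice",
		"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice",
		"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice",
	} {
		if _, err := os.Stat(p); err == nil {
			fmt.Println("exists: ", p)
		} else {
			fmt.Println("missing:", p)
		}
	}
}
```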
Jan 20 01:35:12.563229 kubelet[2282]: E0120 01:35:12.562965 2282 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nmle2.gb1.brightbox.com\" not found" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.581436 kubelet[2282]: I0120 01:35:12.581391 2282 kubelet_node_status.go:75] "Attempting to register node" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.582643 kubelet[2282]: E0120 01:35:12.582609 2282 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.15.2:6443/api/v1/nodes\": dial tcp 10.230.15.2:6443: connect: connection refused" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.590422 kubelet[2282]: E0120 01:35:12.590369 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-nmle2.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.2:6443: connect: connection refused" interval="400ms" Jan 20 01:35:12.689244 kubelet[2282]: I0120 01:35:12.689161 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4aed5d3eccdf848b10d5dd043a925a96-ca-certs\") pod \"kube-apiserver-srv-nmle2.gb1.brightbox.com\" (UID: \"4aed5d3eccdf848b10d5dd043a925a96\") " pod="kube-system/kube-apiserver-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.689244 kubelet[2282]: I0120 01:35:12.689248 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4aed5d3eccdf848b10d5dd043a925a96-k8s-certs\") pod \"kube-apiserver-srv-nmle2.gb1.brightbox.com\" (UID: \"4aed5d3eccdf848b10d5dd043a925a96\") " pod="kube-system/kube-apiserver-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.689679 kubelet[2282]: I0120 01:35:12.689281 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4aed5d3eccdf848b10d5dd043a925a96-usr-share-ca-certificates\") pod \"kube-apiserver-srv-nmle2.gb1.brightbox.com\" (UID: \"4aed5d3eccdf848b10d5dd043a925a96\") " pod="kube-system/kube-apiserver-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.689679 kubelet[2282]: I0120 01:35:12.689326 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/674c2c1601b0d831409fbc9c81a379c0-k8s-certs\") pod \"kube-controller-manager-srv-nmle2.gb1.brightbox.com\" (UID: \"674c2c1601b0d831409fbc9c81a379c0\") " pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.689679 kubelet[2282]: I0120 01:35:12.689357 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/674c2c1601b0d831409fbc9c81a379c0-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-nmle2.gb1.brightbox.com\" (UID: \"674c2c1601b0d831409fbc9c81a379c0\") " pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.689679 kubelet[2282]: I0120 01:35:12.689382 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/674c2c1601b0d831409fbc9c81a379c0-ca-certs\") pod \"kube-controller-manager-srv-nmle2.gb1.brightbox.com\" (UID: 
\"674c2c1601b0d831409fbc9c81a379c0\") " pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.689679 kubelet[2282]: I0120 01:35:12.689410 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/674c2c1601b0d831409fbc9c81a379c0-flexvolume-dir\") pod \"kube-controller-manager-srv-nmle2.gb1.brightbox.com\" (UID: \"674c2c1601b0d831409fbc9c81a379c0\") " pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.689999 kubelet[2282]: I0120 01:35:12.689436 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/674c2c1601b0d831409fbc9c81a379c0-kubeconfig\") pod \"kube-controller-manager-srv-nmle2.gb1.brightbox.com\" (UID: \"674c2c1601b0d831409fbc9c81a379c0\") " pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.689999 kubelet[2282]: I0120 01:35:12.689463 2282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a08b046aca95c19058fdea864e8531b-kubeconfig\") pod \"kube-scheduler-srv-nmle2.gb1.brightbox.com\" (UID: \"4a08b046aca95c19058fdea864e8531b\") " pod="kube-system/kube-scheduler-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.786062 kubelet[2282]: I0120 01:35:12.785775 2282 kubelet_node_status.go:75] "Attempting to register node" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.787061 kubelet[2282]: E0120 01:35:12.787021 2282 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.15.2:6443/api/v1/nodes\": dial tcp 10.230.15.2:6443: connect: connection refused" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:12.844377 containerd[1504]: time="2026-01-20T01:35:12.844236462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-nmle2.gb1.brightbox.com,Uid:4aed5d3eccdf848b10d5dd043a925a96,Namespace:kube-system,Attempt:0,}" Jan 20 01:35:12.862128 containerd[1504]: time="2026-01-20T01:35:12.862066792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-nmle2.gb1.brightbox.com,Uid:674c2c1601b0d831409fbc9c81a379c0,Namespace:kube-system,Attempt:0,}" Jan 20 01:35:12.866281 containerd[1504]: time="2026-01-20T01:35:12.865960522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-nmle2.gb1.brightbox.com,Uid:4a08b046aca95c19058fdea864e8531b,Namespace:kube-system,Attempt:0,}" Jan 20 01:35:12.991410 kubelet[2282]: E0120 01:35:12.991347 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-nmle2.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.2:6443: connect: connection refused" interval="800ms" Jan 20 01:35:13.193372 kubelet[2282]: I0120 01:35:13.192589 2282 kubelet_node_status.go:75] "Attempting to register node" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:13.194388 kubelet[2282]: E0120 01:35:13.194353 2282 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.15.2:6443/api/v1/nodes\": dial tcp 10.230.15.2:6443: connect: connection refused" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:13.408577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3256210148.mount: Deactivated successfully. 
Jan 20 01:35:13.418542 containerd[1504]: time="2026-01-20T01:35:13.418487061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:35:13.419769 containerd[1504]: time="2026-01-20T01:35:13.419712920Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 20 01:35:13.420801 containerd[1504]: time="2026-01-20T01:35:13.420762994Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:35:13.422436 containerd[1504]: time="2026-01-20T01:35:13.422398348Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:35:13.423686 containerd[1504]: time="2026-01-20T01:35:13.423394538Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:35:13.423686 containerd[1504]: time="2026-01-20T01:35:13.423521035Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 01:35:13.425327 containerd[1504]: time="2026-01-20T01:35:13.424763272Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 01:35:13.427745 containerd[1504]: time="2026-01-20T01:35:13.427692017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:35:13.430213 containerd[1504]: time="2026-01-20T01:35:13.430170779Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 585.673539ms" Jan 20 01:35:13.433780 containerd[1504]: time="2026-01-20T01:35:13.433738595Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 571.572546ms" Jan 20 01:35:13.434739 containerd[1504]: time="2026-01-20T01:35:13.434647929Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 568.618215ms" Jan 20 01:35:13.605431 containerd[1504]: time="2026-01-20T01:35:13.602236287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:35:13.605431 containerd[1504]: time="2026-01-20T01:35:13.602696481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:35:13.605431 containerd[1504]: time="2026-01-20T01:35:13.603438697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:35:13.605431 containerd[1504]: time="2026-01-20T01:35:13.603582100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:35:13.631091 containerd[1504]: time="2026-01-20T01:35:13.630908489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:35:13.631413 containerd[1504]: time="2026-01-20T01:35:13.631136202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:35:13.631413 containerd[1504]: time="2026-01-20T01:35:13.631243466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:35:13.631655 containerd[1504]: time="2026-01-20T01:35:13.631519151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:35:13.638036 containerd[1504]: time="2026-01-20T01:35:13.636631093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:35:13.638036 containerd[1504]: time="2026-01-20T01:35:13.637531840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:35:13.638398 containerd[1504]: time="2026-01-20T01:35:13.638277175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:35:13.644957 containerd[1504]: time="2026-01-20T01:35:13.640101244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:35:13.654208 systemd[1]: Started cri-containerd-af04a01274973bd29981bbc9451e4827f9a861cd562c30e39c911ec5f92fad34.scope - libcontainer container af04a01274973bd29981bbc9451e4827f9a861cd562c30e39c911ec5f92fad34. Jan 20 01:35:13.697189 systemd[1]: Started cri-containerd-4a3b2fe96309a36e6809f31ad1086e9a9a56f04f5104ee04672afc5ed052ce70.scope - libcontainer container 4a3b2fe96309a36e6809f31ad1086e9a9a56f04f5104ee04672afc5ed052ce70. Jan 20 01:35:13.715195 systemd[1]: Started cri-containerd-96381b29d2548d9cbb576afe9c1fbbffdb883d8129dab35e646739f88be86ade.scope - libcontainer container 96381b29d2548d9cbb576afe9c1fbbffdb883d8129dab35e646739f88be86ade. 
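Each sandbox the runc shim starts lands in its own transient systemd scope, visible above as cri-containerd-<id>.scope; that is how systemd tracks the shim's processes and cgroup per sandbox or container when containerd uses the systemd cgroup driver. The naming is mechanical, as this trivial sketch shows:

```go
package main

import "fmt"

// scopeName mirrors the unit names in the log: containerd wraps every
// CRI sandbox/container in a transient "cri-containerd-<id>.scope".
func scopeName(id string) string {
	return "cri-containerd-" + id + ".scope"
}

func main() {
	// Sandbox ID taken from the kube-scheduler sandbox started above.
	fmt.Println(scopeName("af04a01274973bd29981bbc9451e4827f9a861cd562c30e39c911ec5f92fad34"))
}
```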
Jan 20 01:35:13.736675 kubelet[2282]: E0120 01:35:13.733497 2282 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.15.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.15.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:35:13.757871 kubelet[2282]: E0120 01:35:13.757791 2282 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.15.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-nmle2.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.15.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:35:13.792796 kubelet[2282]: E0120 01:35:13.792626 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-nmle2.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.2:6443: connect: connection refused" interval="1.6s" Jan 20 01:35:13.818832 containerd[1504]: time="2026-01-20T01:35:13.817357529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-nmle2.gb1.brightbox.com,Uid:4a08b046aca95c19058fdea864e8531b,Namespace:kube-system,Attempt:0,} returns sandbox id \"af04a01274973bd29981bbc9451e4827f9a861cd562c30e39c911ec5f92fad34\"" Jan 20 01:35:13.829025 kubelet[2282]: E0120 01:35:13.828910 2282 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.15.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.15.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:35:13.836883 containerd[1504]: time="2026-01-20T01:35:13.836593054Z" level=info msg="CreateContainer within sandbox \"af04a01274973bd29981bbc9451e4827f9a861cd562c30e39c911ec5f92fad34\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 01:35:13.844389 containerd[1504]: time="2026-01-20T01:35:13.844341528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-nmle2.gb1.brightbox.com,Uid:674c2c1601b0d831409fbc9c81a379c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a3b2fe96309a36e6809f31ad1086e9a9a56f04f5104ee04672afc5ed052ce70\"" Jan 20 01:35:13.850904 containerd[1504]: time="2026-01-20T01:35:13.850862335Z" level=info msg="CreateContainer within sandbox \"4a3b2fe96309a36e6809f31ad1086e9a9a56f04f5104ee04672afc5ed052ce70\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 01:35:13.859387 containerd[1504]: time="2026-01-20T01:35:13.859093941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-nmle2.gb1.brightbox.com,Uid:4aed5d3eccdf848b10d5dd043a925a96,Namespace:kube-system,Attempt:0,} returns sandbox id \"96381b29d2548d9cbb576afe9c1fbbffdb883d8129dab35e646739f88be86ade\"" Jan 20 01:35:13.868471 containerd[1504]: time="2026-01-20T01:35:13.868403192Z" level=info msg="CreateContainer within sandbox \"96381b29d2548d9cbb576afe9c1fbbffdb883d8129dab35e646739f88be86ade\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 01:35:13.872681 containerd[1504]: time="2026-01-20T01:35:13.872632393Z" level=info msg="CreateContainer within sandbox 
\"af04a01274973bd29981bbc9451e4827f9a861cd562c30e39c911ec5f92fad34\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5efd856fec749d0d2c7ab2160a4040da3654884c11d8654970b045eef73d2d88\"" Jan 20 01:35:13.873955 containerd[1504]: time="2026-01-20T01:35:13.873836271Z" level=info msg="CreateContainer within sandbox \"4a3b2fe96309a36e6809f31ad1086e9a9a56f04f5104ee04672afc5ed052ce70\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5e6d9d86a48cabe19dd7ef676eb75ac3ab63399e12e6dcb0b91c761412e2c4ec\"" Jan 20 01:35:13.875072 containerd[1504]: time="2026-01-20T01:35:13.875031263Z" level=info msg="StartContainer for \"5e6d9d86a48cabe19dd7ef676eb75ac3ab63399e12e6dcb0b91c761412e2c4ec\"" Jan 20 01:35:13.876606 containerd[1504]: time="2026-01-20T01:35:13.875255600Z" level=info msg="StartContainer for \"5efd856fec749d0d2c7ab2160a4040da3654884c11d8654970b045eef73d2d88\"" Jan 20 01:35:13.897478 containerd[1504]: time="2026-01-20T01:35:13.897376463Z" level=info msg="CreateContainer within sandbox \"96381b29d2548d9cbb576afe9c1fbbffdb883d8129dab35e646739f88be86ade\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ed722af383a53219d9cd85ff7f4912f7eda4dbae94518ba0b1b54ad432ac1811\"" Jan 20 01:35:13.899707 containerd[1504]: time="2026-01-20T01:35:13.899545132Z" level=info msg="StartContainer for \"ed722af383a53219d9cd85ff7f4912f7eda4dbae94518ba0b1b54ad432ac1811\"" Jan 20 01:35:13.939334 systemd[1]: Started cri-containerd-5e6d9d86a48cabe19dd7ef676eb75ac3ab63399e12e6dcb0b91c761412e2c4ec.scope - libcontainer container 5e6d9d86a48cabe19dd7ef676eb75ac3ab63399e12e6dcb0b91c761412e2c4ec. Jan 20 01:35:13.948380 systemd[1]: Started cri-containerd-5efd856fec749d0d2c7ab2160a4040da3654884c11d8654970b045eef73d2d88.scope - libcontainer container 5efd856fec749d0d2c7ab2160a4040da3654884c11d8654970b045eef73d2d88. Jan 20 01:35:13.968600 kubelet[2282]: E0120 01:35:13.968073 2282 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.15.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.15.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:35:13.973169 systemd[1]: Started cri-containerd-ed722af383a53219d9cd85ff7f4912f7eda4dbae94518ba0b1b54ad432ac1811.scope - libcontainer container ed722af383a53219d9cd85ff7f4912f7eda4dbae94518ba0b1b54ad432ac1811. 
Jan 20 01:35:14.000811 kubelet[2282]: I0120 01:35:14.000752 2282 kubelet_node_status.go:75] "Attempting to register node" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:14.002193 kubelet[2282]: E0120 01:35:14.002136 2282 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.15.2:6443/api/v1/nodes\": dial tcp 10.230.15.2:6443: connect: connection refused" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:14.060363 containerd[1504]: time="2026-01-20T01:35:14.059782910Z" level=info msg="StartContainer for \"5e6d9d86a48cabe19dd7ef676eb75ac3ab63399e12e6dcb0b91c761412e2c4ec\" returns successfully" Jan 20 01:35:14.077193 containerd[1504]: time="2026-01-20T01:35:14.077073636Z" level=info msg="StartContainer for \"ed722af383a53219d9cd85ff7f4912f7eda4dbae94518ba0b1b54ad432ac1811\" returns successfully" Jan 20 01:35:14.098189 containerd[1504]: time="2026-01-20T01:35:14.096909561Z" level=info msg="StartContainer for \"5efd856fec749d0d2c7ab2160a4040da3654884c11d8654970b045eef73d2d88\" returns successfully" Jan 20 01:35:14.422228 kubelet[2282]: E0120 01:35:14.422038 2282 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.15.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.15.2:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:35:14.441318 kubelet[2282]: E0120 01:35:14.440768 2282 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nmle2.gb1.brightbox.com\" not found" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:14.443899 kubelet[2282]: E0120 01:35:14.441779 2282 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nmle2.gb1.brightbox.com\" not found" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:14.447344 kubelet[2282]: E0120 01:35:14.447014 2282 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nmle2.gb1.brightbox.com\" not found" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:15.451164 kubelet[2282]: E0120 01:35:15.450530 2282 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nmle2.gb1.brightbox.com\" not found" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:15.453082 kubelet[2282]: E0120 01:35:15.452660 2282 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nmle2.gb1.brightbox.com\" not found" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:15.607992 kubelet[2282]: I0120 01:35:15.607701 2282 kubelet_node_status.go:75] "Attempting to register node" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:17.254471 kubelet[2282]: E0120 01:35:17.254373 2282 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-nmle2.gb1.brightbox.com\" not found" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:17.317211 kubelet[2282]: I0120 01:35:17.317141 2282 kubelet_node_status.go:78] "Successfully registered node" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:17.337897 kubelet[2282]: I0120 01:35:17.337835 2282 apiserver.go:52] "Watching apiserver" Jan 20 01:35:17.349300 kubelet[2282]: E0120 01:35:17.349109 2282 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{srv-nmle2.gb1.brightbox.com.188c4c8ab9764c81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-nmle2.gb1.brightbox.com,UID:srv-nmle2.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-nmle2.gb1.brightbox.com,},FirstTimestamp:2026-01-20 01:35:12.350497921 +0000 UTC m=+1.211825694,LastTimestamp:2026-01-20 01:35:12.350497921 +0000 UTC m=+1.211825694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-nmle2.gb1.brightbox.com,}" Jan 20 01:35:17.387213 kubelet[2282]: I0120 01:35:17.387151 2282 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:17.388386 kubelet[2282]: I0120 01:35:17.388307 2282 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 01:35:17.408335 kubelet[2282]: E0120 01:35:17.408238 2282 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-nmle2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:17.408335 kubelet[2282]: I0120 01:35:17.408338 2282 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:17.420328 kubelet[2282]: E0120 01:35:17.419909 2282 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-nmle2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:17.420328 kubelet[2282]: I0120 01:35:17.419985 2282 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:17.428377 kubelet[2282]: E0120 01:35:17.428297 2282 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-nmle2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:18.409031 kubelet[2282]: I0120 01:35:18.408197 2282 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:18.421848 kubelet[2282]: I0120 01:35:18.421017 2282 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 01:35:19.201812 systemd[1]: Reloading requested from client PID 2580 ('systemctl') (unit session-11.scope)... Jan 20 01:35:19.201859 systemd[1]: Reloading... Jan 20 01:35:19.333002 zram_generator::config[2628]: No configuration found. Jan 20 01:35:19.510457 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 20 01:35:19.626049 kubelet[2282]: I0120 01:35:19.625189 2282 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:19.636766 kubelet[2282]: I0120 01:35:19.636272 2282 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 01:35:19.649752 systemd[1]: Reloading finished in 447 ms. Jan 20 01:35:19.709664 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:35:19.715470 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 01:35:19.715871 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:35:19.716686 systemd[1]: kubelet.service: Consumed 1.647s CPU time, 127.2M memory peak, 0B memory swap peak. Jan 20 01:35:19.732499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:35:19.981053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:35:20.001558 (kubelet)[2683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:35:20.103562 kubelet[2683]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:35:20.103562 kubelet[2683]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:35:20.104594 kubelet[2683]: I0120 01:35:20.104516 2683 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:35:20.114916 kubelet[2683]: I0120 01:35:20.114872 2683 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 01:35:20.114916 kubelet[2683]: I0120 01:35:20.114904 2683 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:35:20.118238 kubelet[2683]: I0120 01:35:20.118201 2683 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 01:35:20.118238 kubelet[2683]: I0120 01:35:20.118236 2683 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:35:20.118550 kubelet[2683]: I0120 01:35:20.118518 2683 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 01:35:20.120601 kubelet[2683]: I0120 01:35:20.120512 2683 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 01:35:20.127004 kubelet[2683]: I0120 01:35:20.126727 2683 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:35:20.132182 kubelet[2683]: E0120 01:35:20.132141 2683 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 01:35:20.132320 kubelet[2683]: I0120 01:35:20.132202 2683 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 20 01:35:20.138502 kubelet[2683]: I0120 01:35:20.138473 2683 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 20 01:35:20.139953 kubelet[2683]: I0120 01:35:20.139041 2683 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:35:20.139953 kubelet[2683]: I0120 01:35:20.139087 2683 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-nmle2.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:35:20.139953 kubelet[2683]: I0120 01:35:20.139375 2683 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:35:20.139953 kubelet[2683]: I0120 01:35:20.139392 2683 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 01:35:20.140451 kubelet[2683]: I0120 01:35:20.139438 2683 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 01:35:20.141789 kubelet[2683]: I0120 01:35:20.141761 2683 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:35:20.142369 kubelet[2683]: I0120 01:35:20.142347 2683 kubelet.go:475] "Attempting to sync node with API server" Jan 20 01:35:20.142516 kubelet[2683]: I0120 01:35:20.142495 2683 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:35:20.142665 kubelet[2683]: I0120 01:35:20.142645 2683 kubelet.go:387] "Adding apiserver pod source" Jan 20 01:35:20.142782 kubelet[2683]: I0120 01:35:20.142763 2683 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:35:20.148200 kubelet[2683]: I0120 01:35:20.148120 2683 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 01:35:20.150822 kubelet[2683]: I0120 01:35:20.150253 2683 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 01:35:20.151209 kubelet[2683]: I0120 01:35:20.151106 2683 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 
01:35:20.162969 kubelet[2683]: I0120 01:35:20.162200 2683 server.go:1262] "Started kubelet" Jan 20 01:35:20.172980 kubelet[2683]: I0120 01:35:20.172323 2683 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:35:20.179966 kubelet[2683]: I0120 01:35:20.175177 2683 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:35:20.179966 kubelet[2683]: I0120 01:35:20.175332 2683 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 01:35:20.179966 kubelet[2683]: I0120 01:35:20.175654 2683 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:35:20.182102 kubelet[2683]: I0120 01:35:20.182059 2683 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:35:20.195972 kubelet[2683]: I0120 01:35:20.194361 2683 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:35:20.207054 kubelet[2683]: I0120 01:35:20.205070 2683 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 01:35:20.208474 kubelet[2683]: I0120 01:35:20.208413 2683 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 01:35:20.209975 kubelet[2683]: I0120 01:35:20.208648 2683 reconciler.go:29] "Reconciler: start to sync state" Jan 20 01:35:20.210646 kubelet[2683]: I0120 01:35:20.210618 2683 server.go:310] "Adding debug handlers to kubelet server" Jan 20 01:35:20.231467 kubelet[2683]: I0120 01:35:20.231029 2683 factory.go:223] Registration of the systemd container factory successfully Jan 20 01:35:20.231467 kubelet[2683]: I0120 01:35:20.231278 2683 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:35:20.236659 kubelet[2683]: E0120 01:35:20.235723 2683 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:35:20.237437 kubelet[2683]: I0120 01:35:20.237404 2683 factory.go:223] Registration of the containerd container factory successfully Jan 20 01:35:20.259277 kubelet[2683]: I0120 01:35:20.259130 2683 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 20 01:35:20.272266 kubelet[2683]: I0120 01:35:20.272101 2683 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 20 01:35:20.273093 kubelet[2683]: I0120 01:35:20.272847 2683 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 01:35:20.274053 kubelet[2683]: I0120 01:35:20.273977 2683 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 01:35:20.274797 kubelet[2683]: E0120 01:35:20.274401 2683 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:35:20.370998 kubelet[2683]: I0120 01:35:20.369613 2683 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:35:20.370998 kubelet[2683]: I0120 01:35:20.369792 2683 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:35:20.370998 kubelet[2683]: I0120 01:35:20.369881 2683 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:35:20.370998 kubelet[2683]: I0120 01:35:20.370301 2683 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 01:35:20.370998 kubelet[2683]: I0120 01:35:20.370331 2683 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 01:35:20.370998 kubelet[2683]: I0120 01:35:20.370370 2683 policy_none.go:49] "None policy: Start" Jan 20 01:35:20.370998 kubelet[2683]: I0120 01:35:20.370404 2683 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 01:35:20.370998 kubelet[2683]: I0120 01:35:20.370434 2683 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 01:35:20.370998 kubelet[2683]: I0120 01:35:20.370614 2683 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 20 01:35:20.370998 kubelet[2683]: I0120 01:35:20.370644 2683 policy_none.go:47] "Start" Jan 20 01:35:20.375257 kubelet[2683]: E0120 01:35:20.374840 2683 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:35:20.392963 kubelet[2683]: E0120 01:35:20.390570 2683 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:35:20.392963 kubelet[2683]: I0120 01:35:20.390991 2683 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:35:20.392963 kubelet[2683]: I0120 01:35:20.391040 2683 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:35:20.398792 kubelet[2683]: I0120 01:35:20.395577 2683 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:35:20.413254 kubelet[2683]: E0120 01:35:20.413075 2683 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:35:20.520993 kubelet[2683]: I0120 01:35:20.520539 2683 kubelet_node_status.go:75] "Attempting to register node" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.536654 kubelet[2683]: I0120 01:35:20.536602 2683 kubelet_node_status.go:124] "Node was previously registered" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.536654 kubelet[2683]: I0120 01:35:20.536758 2683 kubelet_node_status.go:78] "Successfully registered node" node="srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.576220 kubelet[2683]: I0120 01:35:20.576166 2683 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.576611 kubelet[2683]: I0120 01:35:20.576579 2683 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.583027 kubelet[2683]: I0120 01:35:20.581473 2683 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.594442 kubelet[2683]: I0120 01:35:20.593725 2683 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 01:35:20.599132 kubelet[2683]: I0120 01:35:20.599098 2683 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 01:35:20.599434 kubelet[2683]: E0120 01:35:20.599330 2683 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-nmle2.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.600676 kubelet[2683]: I0120 01:35:20.600633 2683 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 01:35:20.600812 kubelet[2683]: E0120 01:35:20.600707 2683 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-nmle2.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.610184 kubelet[2683]: I0120 01:35:20.610136 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4aed5d3eccdf848b10d5dd043a925a96-ca-certs\") pod \"kube-apiserver-srv-nmle2.gb1.brightbox.com\" (UID: \"4aed5d3eccdf848b10d5dd043a925a96\") " pod="kube-system/kube-apiserver-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.610184 kubelet[2683]: I0120 01:35:20.610193 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/674c2c1601b0d831409fbc9c81a379c0-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-nmle2.gb1.brightbox.com\" (UID: \"674c2c1601b0d831409fbc9c81a379c0\") " pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.610184 kubelet[2683]: I0120 01:35:20.610312 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a08b046aca95c19058fdea864e8531b-kubeconfig\") pod \"kube-scheduler-srv-nmle2.gb1.brightbox.com\" (UID: \"4a08b046aca95c19058fdea864e8531b\") " 
pod="kube-system/kube-scheduler-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.610184 kubelet[2683]: I0120 01:35:20.610405 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4aed5d3eccdf848b10d5dd043a925a96-k8s-certs\") pod \"kube-apiserver-srv-nmle2.gb1.brightbox.com\" (UID: \"4aed5d3eccdf848b10d5dd043a925a96\") " pod="kube-system/kube-apiserver-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.610184 kubelet[2683]: I0120 01:35:20.610441 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4aed5d3eccdf848b10d5dd043a925a96-usr-share-ca-certificates\") pod \"kube-apiserver-srv-nmle2.gb1.brightbox.com\" (UID: \"4aed5d3eccdf848b10d5dd043a925a96\") " pod="kube-system/kube-apiserver-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.610998 kubelet[2683]: I0120 01:35:20.610479 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/674c2c1601b0d831409fbc9c81a379c0-ca-certs\") pod \"kube-controller-manager-srv-nmle2.gb1.brightbox.com\" (UID: \"674c2c1601b0d831409fbc9c81a379c0\") " pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.610998 kubelet[2683]: I0120 01:35:20.610508 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/674c2c1601b0d831409fbc9c81a379c0-flexvolume-dir\") pod \"kube-controller-manager-srv-nmle2.gb1.brightbox.com\" (UID: \"674c2c1601b0d831409fbc9c81a379c0\") " pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.610998 kubelet[2683]: I0120 01:35:20.610553 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/674c2c1601b0d831409fbc9c81a379c0-k8s-certs\") pod \"kube-controller-manager-srv-nmle2.gb1.brightbox.com\" (UID: \"674c2c1601b0d831409fbc9c81a379c0\") " pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:20.610998 kubelet[2683]: I0120 01:35:20.610582 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/674c2c1601b0d831409fbc9c81a379c0-kubeconfig\") pod \"kube-controller-manager-srv-nmle2.gb1.brightbox.com\" (UID: \"674c2c1601b0d831409fbc9c81a379c0\") " pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:21.145061 kubelet[2683]: I0120 01:35:21.144579 2683 apiserver.go:52] "Watching apiserver" Jan 20 01:35:21.208843 kubelet[2683]: I0120 01:35:21.208785 2683 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 01:35:21.331151 kubelet[2683]: I0120 01:35:21.331097 2683 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:21.346294 kubelet[2683]: I0120 01:35:21.346252 2683 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 01:35:21.346492 kubelet[2683]: E0120 01:35:21.346421 2683 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-nmle2.gb1.brightbox.com\" already exists" 
pod="kube-system/kube-scheduler-srv-nmle2.gb1.brightbox.com" Jan 20 01:35:21.396843 kubelet[2683]: I0120 01:35:21.396501 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-nmle2.gb1.brightbox.com" podStartSLOduration=3.396460217 podStartE2EDuration="3.396460217s" podCreationTimestamp="2026-01-20 01:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:35:21.381820064 +0000 UTC m=+1.370193202" watchObservedRunningTime="2026-01-20 01:35:21.396460217 +0000 UTC m=+1.384833353" Jan 20 01:35:21.411736 kubelet[2683]: I0120 01:35:21.411616 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-nmle2.gb1.brightbox.com" podStartSLOduration=2.411592963 podStartE2EDuration="2.411592963s" podCreationTimestamp="2026-01-20 01:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:35:21.396879763 +0000 UTC m=+1.385252894" watchObservedRunningTime="2026-01-20 01:35:21.411592963 +0000 UTC m=+1.399966155" Jan 20 01:35:21.430672 kubelet[2683]: I0120 01:35:21.430085 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-nmle2.gb1.brightbox.com" podStartSLOduration=1.43006456 podStartE2EDuration="1.43006456s" podCreationTimestamp="2026-01-20 01:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:35:21.414001577 +0000 UTC m=+1.402374731" watchObservedRunningTime="2026-01-20 01:35:21.43006456 +0000 UTC m=+1.418437690" Jan 20 01:35:25.304683 kubelet[2683]: I0120 01:35:25.304614 2683 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 01:35:25.307860 kubelet[2683]: I0120 01:35:25.306513 2683 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 01:35:25.307947 containerd[1504]: time="2026-01-20T01:35:25.305518635Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 01:35:26.211076 systemd[1]: Created slice kubepods-besteffort-pod9a34ac0d_a9ad_48f9_a8ce_e458b8cabff6.slice - libcontainer container kubepods-besteffort-pod9a34ac0d_a9ad_48f9_a8ce_e458b8cabff6.slice. 
Jan 20 01:35:26.244875 kubelet[2683]: I0120 01:35:26.244809 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a34ac0d-a9ad-48f9-a8ce-e458b8cabff6-lib-modules\") pod \"kube-proxy-pj5zc\" (UID: \"9a34ac0d-a9ad-48f9-a8ce-e458b8cabff6\") " pod="kube-system/kube-proxy-pj5zc" Jan 20 01:35:26.244875 kubelet[2683]: I0120 01:35:26.244877 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5wwm\" (UniqueName: \"kubernetes.io/projected/9a34ac0d-a9ad-48f9-a8ce-e458b8cabff6-kube-api-access-b5wwm\") pod \"kube-proxy-pj5zc\" (UID: \"9a34ac0d-a9ad-48f9-a8ce-e458b8cabff6\") " pod="kube-system/kube-proxy-pj5zc" Jan 20 01:35:26.245205 kubelet[2683]: I0120 01:35:26.244930 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a34ac0d-a9ad-48f9-a8ce-e458b8cabff6-kube-proxy\") pod \"kube-proxy-pj5zc\" (UID: \"9a34ac0d-a9ad-48f9-a8ce-e458b8cabff6\") " pod="kube-system/kube-proxy-pj5zc" Jan 20 01:35:26.245205 kubelet[2683]: I0120 01:35:26.244977 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a34ac0d-a9ad-48f9-a8ce-e458b8cabff6-xtables-lock\") pod \"kube-proxy-pj5zc\" (UID: \"9a34ac0d-a9ad-48f9-a8ce-e458b8cabff6\") " pod="kube-system/kube-proxy-pj5zc" Jan 20 01:35:26.528887 containerd[1504]: time="2026-01-20T01:35:26.528572049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pj5zc,Uid:9a34ac0d-a9ad-48f9-a8ce-e458b8cabff6,Namespace:kube-system,Attempt:0,}" Jan 20 01:35:26.620787 containerd[1504]: time="2026-01-20T01:35:26.615381354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:35:26.620787 containerd[1504]: time="2026-01-20T01:35:26.615536165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:35:26.620787 containerd[1504]: time="2026-01-20T01:35:26.615593273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:35:26.625805 containerd[1504]: time="2026-01-20T01:35:26.625490528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:35:26.651056 kubelet[2683]: I0120 01:35:26.651007 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e3c8a168-29a7-4ced-907d-3b33cb160e30-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-lfmk5\" (UID: \"e3c8a168-29a7-4ced-907d-3b33cb160e30\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-lfmk5" Jan 20 01:35:26.651056 kubelet[2683]: I0120 01:35:26.651075 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wvk\" (UniqueName: \"kubernetes.io/projected/e3c8a168-29a7-4ced-907d-3b33cb160e30-kube-api-access-r7wvk\") pod \"tigera-operator-65cdcdfd6d-lfmk5\" (UID: \"e3c8a168-29a7-4ced-907d-3b33cb160e30\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-lfmk5" Jan 20 01:35:26.666699 systemd[1]: Created slice kubepods-besteffort-pode3c8a168_29a7_4ced_907d_3b33cb160e30.slice - libcontainer container kubepods-besteffort-pode3c8a168_29a7_4ced_907d_3b33cb160e30.slice. Jan 20 01:35:26.739219 systemd[1]: Started cri-containerd-ee4bb2ccdc3f33d88e1817a6bc706b605e37c15ed35e1a466406f0add4c7d2d2.scope - libcontainer container ee4bb2ccdc3f33d88e1817a6bc706b605e37c15ed35e1a466406f0add4c7d2d2. Jan 20 01:35:26.834666 containerd[1504]: time="2026-01-20T01:35:26.834420860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pj5zc,Uid:9a34ac0d-a9ad-48f9-a8ce-e458b8cabff6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee4bb2ccdc3f33d88e1817a6bc706b605e37c15ed35e1a466406f0add4c7d2d2\"" Jan 20 01:35:26.846085 containerd[1504]: time="2026-01-20T01:35:26.845820394Z" level=info msg="CreateContainer within sandbox \"ee4bb2ccdc3f33d88e1817a6bc706b605e37c15ed35e1a466406f0add4c7d2d2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 01:35:26.867079 containerd[1504]: time="2026-01-20T01:35:26.866842703Z" level=info msg="CreateContainer within sandbox \"ee4bb2ccdc3f33d88e1817a6bc706b605e37c15ed35e1a466406f0add4c7d2d2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"17a39bb38362e251e28f196a8047248b870a5bf31c119e56eb26e69b3b7a6337\"" Jan 20 01:35:26.868968 containerd[1504]: time="2026-01-20T01:35:26.868003740Z" level=info msg="StartContainer for \"17a39bb38362e251e28f196a8047248b870a5bf31c119e56eb26e69b3b7a6337\"" Jan 20 01:35:26.907901 systemd[1]: Started cri-containerd-17a39bb38362e251e28f196a8047248b870a5bf31c119e56eb26e69b3b7a6337.scope - libcontainer container 17a39bb38362e251e28f196a8047248b870a5bf31c119e56eb26e69b3b7a6337. Jan 20 01:35:26.965860 containerd[1504]: time="2026-01-20T01:35:26.965578676Z" level=info msg="StartContainer for \"17a39bb38362e251e28f196a8047248b870a5bf31c119e56eb26e69b3b7a6337\" returns successfully" Jan 20 01:35:26.992989 containerd[1504]: time="2026-01-20T01:35:26.992034099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-lfmk5,Uid:e3c8a168-29a7-4ced-907d-3b33cb160e30,Namespace:tigera-operator,Attempt:0,}" Jan 20 01:35:27.038261 containerd[1504]: time="2026-01-20T01:35:27.037254746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:35:27.038261 containerd[1504]: time="2026-01-20T01:35:27.037405076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:35:27.038261 containerd[1504]: time="2026-01-20T01:35:27.037431787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:35:27.038261 containerd[1504]: time="2026-01-20T01:35:27.037785640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:35:27.073230 systemd[1]: Started cri-containerd-ca4bbb037817574b42cfe39cf3eae0e9eb16d23802bbd6b3d2178e3cfaa89767.scope - libcontainer container ca4bbb037817574b42cfe39cf3eae0e9eb16d23802bbd6b3d2178e3cfaa89767. Jan 20 01:35:27.168686 containerd[1504]: time="2026-01-20T01:35:27.168591903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-lfmk5,Uid:e3c8a168-29a7-4ced-907d-3b33cb160e30,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ca4bbb037817574b42cfe39cf3eae0e9eb16d23802bbd6b3d2178e3cfaa89767\"" Jan 20 01:35:27.172549 containerd[1504]: time="2026-01-20T01:35:27.172180674Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 01:35:27.388793 kubelet[2683]: I0120 01:35:27.388711 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pj5zc" podStartSLOduration=1.388686528 podStartE2EDuration="1.388686528s" podCreationTimestamp="2026-01-20 01:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:35:27.388511463 +0000 UTC m=+7.376884638" watchObservedRunningTime="2026-01-20 01:35:27.388686528 +0000 UTC m=+7.377059665" Jan 20 01:35:28.445365 systemd[1]: Started sshd@12-10.230.15.2:22-134.209.94.87:46880.service - OpenSSH per-connection server daemon (134.209.94.87:46880). Jan 20 01:35:28.744535 sshd[2988]: Connection closed by authenticating user root 134.209.94.87 port 46880 [preauth] Jan 20 01:35:28.747266 systemd[1]: sshd@12-10.230.15.2:22-134.209.94.87:46880.service: Deactivated successfully. Jan 20 01:35:29.277735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3215946776.mount: Deactivated successfully. 
Jan 20 01:35:30.312414 containerd[1504]: time="2026-01-20T01:35:30.311056569Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:30.312414 containerd[1504]: time="2026-01-20T01:35:30.312319474Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 20 01:35:30.313364 containerd[1504]: time="2026-01-20T01:35:30.313291332Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:30.315916 containerd[1504]: time="2026-01-20T01:35:30.315882071Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:30.317178 containerd[1504]: time="2026-01-20T01:35:30.317135799Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.144897322s" Jan 20 01:35:30.317291 containerd[1504]: time="2026-01-20T01:35:30.317182112Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 01:35:30.345310 containerd[1504]: time="2026-01-20T01:35:30.345263555Z" level=info msg="CreateContainer within sandbox \"ca4bbb037817574b42cfe39cf3eae0e9eb16d23802bbd6b3d2178e3cfaa89767\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 01:35:30.364580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2225482232.mount: Deactivated successfully. Jan 20 01:35:30.368771 containerd[1504]: time="2026-01-20T01:35:30.368695924Z" level=info msg="CreateContainer within sandbox \"ca4bbb037817574b42cfe39cf3eae0e9eb16d23802bbd6b3d2178e3cfaa89767\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dd60c8b5b6a4b2fdcc5677cdf86bf2fe25828e0df6783d4eef3bcb0bceabb0ab\"" Jan 20 01:35:30.369431 containerd[1504]: time="2026-01-20T01:35:30.369373821Z" level=info msg="StartContainer for \"dd60c8b5b6a4b2fdcc5677cdf86bf2fe25828e0df6783d4eef3bcb0bceabb0ab\"" Jan 20 01:35:30.425214 systemd[1]: Started cri-containerd-dd60c8b5b6a4b2fdcc5677cdf86bf2fe25828e0df6783d4eef3bcb0bceabb0ab.scope - libcontainer container dd60c8b5b6a4b2fdcc5677cdf86bf2fe25828e0df6783d4eef3bcb0bceabb0ab. Jan 20 01:35:30.469701 containerd[1504]: time="2026-01-20T01:35:30.469609165Z" level=info msg="StartContainer for \"dd60c8b5b6a4b2fdcc5677cdf86bf2fe25828e0df6783d4eef3bcb0bceabb0ab\" returns successfully" Jan 20 01:35:38.369913 sudo[1762]: pam_unix(sudo:session): session closed for user root Jan 20 01:35:38.466855 sshd[1759]: pam_unix(sshd:session): session closed for user core Jan 20 01:35:38.475231 systemd-logind[1481]: Session 11 logged out. Waiting for processes to exit. Jan 20 01:35:38.477733 systemd[1]: sshd@9-10.230.15.2:22-20.161.92.111:34310.service: Deactivated successfully. Jan 20 01:35:38.484913 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 01:35:38.486025 systemd[1]: session-11.scope: Consumed 6.874s CPU time, 144.6M memory peak, 0B memory swap peak. 
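The "Pulled image" entry above carries the repo tag, the repo digest, and the wall-clock pull time in a single line. A small sketch that scrapes such PullImage completions out of a journal dump shaped like this one (the regex is inferred only from the lines above, with their escaped quotes):

    import re
    import sys

    # containerd logs: Pulled image \"<ref>\" ... in <secs>s
    PULL_RE = re.compile(r'Pulled image \\"(?P<ref>[^"\\]+)\\".* in (?P<secs>\d+\.\d+)s')

    for line in sys.stdin:
        m = PULL_RE.search(line)
        if m:
            print(f'{m.group("ref")}: {float(m.group("secs")):.3f}s')

Fed this journal, it would print quay.io/tigera/operator:v1.38.7: 3.145s.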
Jan 20 01:35:38.490084 systemd-logind[1481]: Removed session 11. Jan 20 01:35:46.680704 kubelet[2683]: I0120 01:35:46.676827 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-lfmk5" podStartSLOduration=17.526928691 podStartE2EDuration="20.676758889s" podCreationTimestamp="2026-01-20 01:35:26 +0000 UTC" firstStartedPulling="2026-01-20 01:35:27.170771755 +0000 UTC m=+7.159144883" lastFinishedPulling="2026-01-20 01:35:30.320601955 +0000 UTC m=+10.308975081" observedRunningTime="2026-01-20 01:35:31.428143113 +0000 UTC m=+11.416516248" watchObservedRunningTime="2026-01-20 01:35:46.676758889 +0000 UTC m=+26.665132026" Jan 20 01:35:46.708982 systemd[1]: Created slice kubepods-besteffort-pod660cd4ad_b32e_4696_846d_6c0fbbe83d22.slice - libcontainer container kubepods-besteffort-pod660cd4ad_b32e_4696_846d_6c0fbbe83d22.slice. Jan 20 01:35:46.803228 kubelet[2683]: I0120 01:35:46.803034 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/660cd4ad-b32e-4696-846d-6c0fbbe83d22-typha-certs\") pod \"calico-typha-57dff764d4-vm74r\" (UID: \"660cd4ad-b32e-4696-846d-6c0fbbe83d22\") " pod="calico-system/calico-typha-57dff764d4-vm74r" Jan 20 01:35:46.803228 kubelet[2683]: I0120 01:35:46.803151 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/660cd4ad-b32e-4696-846d-6c0fbbe83d22-tigera-ca-bundle\") pod \"calico-typha-57dff764d4-vm74r\" (UID: \"660cd4ad-b32e-4696-846d-6c0fbbe83d22\") " pod="calico-system/calico-typha-57dff764d4-vm74r" Jan 20 01:35:46.803228 kubelet[2683]: I0120 01:35:46.803196 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vczp4\" (UniqueName: \"kubernetes.io/projected/660cd4ad-b32e-4696-846d-6c0fbbe83d22-kube-api-access-vczp4\") pod \"calico-typha-57dff764d4-vm74r\" (UID: \"660cd4ad-b32e-4696-846d-6c0fbbe83d22\") " pod="calico-system/calico-typha-57dff764d4-vm74r" Jan 20 01:35:46.927378 systemd[1]: Created slice kubepods-besteffort-podae04f041_7ea5_4e96_846f_d6f46ef5a64b.slice - libcontainer container kubepods-besteffort-podae04f041_7ea5_4e96_846f_d6f46ef5a64b.slice. 
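The tigera-operator entry just above also shows how pod_startup_latency_tracker splits startup time: podStartSLOduration is the end-to-end duration minus the image-pulling window, which is why the kube-proxy and control-plane pods (nothing pulled, zero-valued pull timestamps) report SLO == E2E while tigera-operator does not. The monotonic m=+ offsets in the entry check out:

    # offsets in seconds since kubelet start, copied from the entry above
    first_started_pulling = 7.159144883
    last_finished_pulling = 10.308975081
    e2e = 20.676758889                  # podStartE2EDuration

    pulling = last_finished_pulling - first_started_pulling  # 3.149830198
    print(f"{e2e - pulling:.9f}")       # 17.526928691 = podStartSLOduration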
Jan 20 01:35:47.006199 kubelet[2683]: I0120 01:35:47.003870 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ae04f041-7ea5-4e96-846f-d6f46ef5a64b-var-lib-calico\") pod \"calico-node-l79ft\" (UID: \"ae04f041-7ea5-4e96-846f-d6f46ef5a64b\") " pod="calico-system/calico-node-l79ft" Jan 20 01:35:47.006199 kubelet[2683]: I0120 01:35:47.003963 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ae04f041-7ea5-4e96-846f-d6f46ef5a64b-node-certs\") pod \"calico-node-l79ft\" (UID: \"ae04f041-7ea5-4e96-846f-d6f46ef5a64b\") " pod="calico-system/calico-node-l79ft" Jan 20 01:35:47.006199 kubelet[2683]: I0120 01:35:47.004048 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae04f041-7ea5-4e96-846f-d6f46ef5a64b-tigera-ca-bundle\") pod \"calico-node-l79ft\" (UID: \"ae04f041-7ea5-4e96-846f-d6f46ef5a64b\") " pod="calico-system/calico-node-l79ft" Jan 20 01:35:47.006199 kubelet[2683]: I0120 01:35:47.004129 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ae04f041-7ea5-4e96-846f-d6f46ef5a64b-flexvol-driver-host\") pod \"calico-node-l79ft\" (UID: \"ae04f041-7ea5-4e96-846f-d6f46ef5a64b\") " pod="calico-system/calico-node-l79ft" Jan 20 01:35:47.006199 kubelet[2683]: I0120 01:35:47.004225 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ae04f041-7ea5-4e96-846f-d6f46ef5a64b-policysync\") pod \"calico-node-l79ft\" (UID: \"ae04f041-7ea5-4e96-846f-d6f46ef5a64b\") " pod="calico-system/calico-node-l79ft" Jan 20 01:35:47.006645 kubelet[2683]: I0120 01:35:47.004270 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae04f041-7ea5-4e96-846f-d6f46ef5a64b-xtables-lock\") pod \"calico-node-l79ft\" (UID: \"ae04f041-7ea5-4e96-846f-d6f46ef5a64b\") " pod="calico-system/calico-node-l79ft" Jan 20 01:35:47.006645 kubelet[2683]: I0120 01:35:47.004299 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ae04f041-7ea5-4e96-846f-d6f46ef5a64b-cni-net-dir\") pod \"calico-node-l79ft\" (UID: \"ae04f041-7ea5-4e96-846f-d6f46ef5a64b\") " pod="calico-system/calico-node-l79ft" Jan 20 01:35:47.006645 kubelet[2683]: I0120 01:35:47.004336 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae04f041-7ea5-4e96-846f-d6f46ef5a64b-lib-modules\") pod \"calico-node-l79ft\" (UID: \"ae04f041-7ea5-4e96-846f-d6f46ef5a64b\") " pod="calico-system/calico-node-l79ft" Jan 20 01:35:47.006645 kubelet[2683]: I0120 01:35:47.004361 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ae04f041-7ea5-4e96-846f-d6f46ef5a64b-cni-bin-dir\") pod \"calico-node-l79ft\" (UID: \"ae04f041-7ea5-4e96-846f-d6f46ef5a64b\") " pod="calico-system/calico-node-l79ft" Jan 20 01:35:47.006645 kubelet[2683]: I0120 01:35:47.004391 2683 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ae04f041-7ea5-4e96-846f-d6f46ef5a64b-cni-log-dir\") pod \"calico-node-l79ft\" (UID: \"ae04f041-7ea5-4e96-846f-d6f46ef5a64b\") " pod="calico-system/calico-node-l79ft" Jan 20 01:35:47.008754 kubelet[2683]: I0120 01:35:47.004483 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ae04f041-7ea5-4e96-846f-d6f46ef5a64b-var-run-calico\") pod \"calico-node-l79ft\" (UID: \"ae04f041-7ea5-4e96-846f-d6f46ef5a64b\") " pod="calico-system/calico-node-l79ft" Jan 20 01:35:47.008754 kubelet[2683]: I0120 01:35:47.004527 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52bbd\" (UniqueName: \"kubernetes.io/projected/ae04f041-7ea5-4e96-846f-d6f46ef5a64b-kube-api-access-52bbd\") pod \"calico-node-l79ft\" (UID: \"ae04f041-7ea5-4e96-846f-d6f46ef5a64b\") " pod="calico-system/calico-node-l79ft" Jan 20 01:35:47.028982 containerd[1504]: time="2026-01-20T01:35:47.028845684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57dff764d4-vm74r,Uid:660cd4ad-b32e-4696-846d-6c0fbbe83d22,Namespace:calico-system,Attempt:0,}" Jan 20 01:35:47.097072 kubelet[2683]: E0120 01:35:47.096981 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:35:47.134984 kubelet[2683]: E0120 01:35:47.134375 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.134984 kubelet[2683]: W0120 01:35:47.134427 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.134984 kubelet[2683]: E0120 01:35:47.134573 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.180797 kubelet[2683]: E0120 01:35:47.179719 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.180797 kubelet[2683]: W0120 01:35:47.179754 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.180797 kubelet[2683]: E0120 01:35:47.179851 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.184133 containerd[1504]: time="2026-01-20T01:35:47.178667307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:35:47.184133 containerd[1504]: time="2026-01-20T01:35:47.178797501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:35:47.184133 containerd[1504]: time="2026-01-20T01:35:47.178815686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:35:47.184133 containerd[1504]: time="2026-01-20T01:35:47.179012783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:35:47.209439 kubelet[2683]: E0120 01:35:47.209369 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.209439 kubelet[2683]: W0120 01:35:47.209400 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.209439 kubelet[2683]: E0120 01:35:47.209430 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.210471 kubelet[2683]: I0120 01:35:47.209471 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8kmd\" (UniqueName: \"kubernetes.io/projected/fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9-kube-api-access-z8kmd\") pod \"csi-node-driver-wdqf6\" (UID: \"fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9\") " pod="calico-system/csi-node-driver-wdqf6" Jan 20 01:35:47.211046 kubelet[2683]: E0120 01:35:47.211018 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.211046 kubelet[2683]: W0120 01:35:47.211042 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.212179 kubelet[2683]: E0120 01:35:47.211059 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.212179 kubelet[2683]: I0120 01:35:47.211092 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9-kubelet-dir\") pod \"csi-node-driver-wdqf6\" (UID: \"fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9\") " pod="calico-system/csi-node-driver-wdqf6" Jan 20 01:35:47.213370 kubelet[2683]: E0120 01:35:47.213334 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.213370 kubelet[2683]: W0120 01:35:47.213357 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.213716 kubelet[2683]: E0120 01:35:47.213376 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:35:47.213716 kubelet[2683]: I0120 01:35:47.213573 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9-varrun\") pod \"csi-node-driver-wdqf6\" (UID: \"fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9\") " pod="calico-system/csi-node-driver-wdqf6" Jan 20 01:35:47.215173 kubelet[2683]: E0120 01:35:47.215113 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.215173 kubelet[2683]: W0120 01:35:47.215157 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.215173 kubelet[2683]: E0120 01:35:47.215175 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.217300 kubelet[2683]: E0120 01:35:47.217274 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.217300 kubelet[2683]: W0120 01:35:47.217296 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.218055 kubelet[2683]: E0120 01:35:47.217313 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.220006 kubelet[2683]: E0120 01:35:47.219429 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.220006 kubelet[2683]: W0120 01:35:47.219454 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.220006 kubelet[2683]: E0120 01:35:47.219481 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.221068 kubelet[2683]: I0120 01:35:47.220993 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9-socket-dir\") pod \"csi-node-driver-wdqf6\" (UID: \"fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9\") " pod="calico-system/csi-node-driver-wdqf6" Jan 20 01:35:47.221374 kubelet[2683]: E0120 01:35:47.221354 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.221563 kubelet[2683]: W0120 01:35:47.221505 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.221563 kubelet[2683]: E0120 01:35:47.221531 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:35:47.225186 kubelet[2683]: E0120 01:35:47.224382 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.225186 kubelet[2683]: W0120 01:35:47.224403 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.225186 kubelet[2683]: E0120 01:35:47.224422 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.226991 kubelet[2683]: E0120 01:35:47.226797 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.226991 kubelet[2683]: W0120 01:35:47.226819 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.226991 kubelet[2683]: E0120 01:35:47.226836 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.227398 kubelet[2683]: E0120 01:35:47.227217 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.227398 kubelet[2683]: W0120 01:35:47.227232 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.227398 kubelet[2683]: E0120 01:35:47.227247 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.229975 kubelet[2683]: E0120 01:35:47.228351 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.229975 kubelet[2683]: W0120 01:35:47.228370 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.229975 kubelet[2683]: E0120 01:35:47.228387 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.230457 kubelet[2683]: E0120 01:35:47.230240 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.230457 kubelet[2683]: W0120 01:35:47.230260 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.230457 kubelet[2683]: E0120 01:35:47.230276 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:35:47.233477 kubelet[2683]: E0120 01:35:47.233075 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.233477 kubelet[2683]: W0120 01:35:47.233103 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.233477 kubelet[2683]: E0120 01:35:47.233123 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.233477 kubelet[2683]: I0120 01:35:47.233165 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9-registration-dir\") pod \"csi-node-driver-wdqf6\" (UID: \"fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9\") " pod="calico-system/csi-node-driver-wdqf6" Jan 20 01:35:47.234083 kubelet[2683]: E0120 01:35:47.233507 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.234083 kubelet[2683]: W0120 01:35:47.233576 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.234083 kubelet[2683]: E0120 01:35:47.233592 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.234083 kubelet[2683]: E0120 01:35:47.234077 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.234273 kubelet[2683]: W0120 01:35:47.234092 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.234273 kubelet[2683]: E0120 01:35:47.234108 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.239953 containerd[1504]: time="2026-01-20T01:35:47.238890667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l79ft,Uid:ae04f041-7ea5-4e96-846f-d6f46ef5a64b,Namespace:calico-system,Attempt:0,}" Jan 20 01:35:47.297517 systemd[1]: Started cri-containerd-3222a0956995315f5337b791e0ef2b8c5a2dae715c4388620af57f5680dc7018.scope - libcontainer container 3222a0956995315f5337b791e0ef2b8c5a2dae715c4388620af57f5680dc7018. 
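The driver-call failures repeated through this stretch are the kubelet probing its FlexVolume plugin directory: Calico ships a nodeagent~uds plugin directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec, the uds executable is not installed yet (calico-node's flexvol init container places it there later), so each probe returns empty output and unmarshalling "" as JSON fails. A rough Python analogue of that probe, purely illustrative (the real probe is Go code inside the kubelet):

    import json
    import subprocess

    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    try:
        out = subprocess.run([DRIVER, "init"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # mirrors: executable file not found in $PATH, output: ""

    try:
        json.loads(out)  # "" raises here; the Go side reports
                         # "unexpected end of JSON input"
        print("driver initialized")
    except json.JSONDecodeError as exc:
        print(f"failed to unmarshal driver output: {exc}")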
Jan 20 01:35:47.337100 kubelet[2683]: E0120 01:35:47.337056 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:47.340053 kubelet[2683]: W0120 01:35:47.337378 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:47.340053 kubelet[2683]: E0120 01:35:47.337418 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:47.359929 containerd[1504]: time="2026-01-20T01:35:47.357668558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:35:47.359929 containerd[1504]: time="2026-01-20T01:35:47.357847952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:35:47.359929 containerd[1504]: time="2026-01-20T01:35:47.357875185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:35:47.366467 containerd[1504]: time="2026-01-20T01:35:47.360863397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:35:47.441444 systemd[1]: Started cri-containerd-06689a5296ed8811d0c0f1f6a7cefc89f5eecc1e8b760b412353818d5f0cbc1d.scope - libcontainer container 06689a5296ed8811d0c0f1f6a7cefc89f5eecc1e8b760b412353818d5f0cbc1d. Jan 20 01:35:47.487611 containerd[1504]: time="2026-01-20T01:35:47.487527928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57dff764d4-vm74r,Uid:660cd4ad-b32e-4696-846d-6c0fbbe83d22,Namespace:calico-system,Attempt:0,} returns sandbox id \"3222a0956995315f5337b791e0ef2b8c5a2dae715c4388620af57f5680dc7018\"" Jan 20 01:35:47.494818 containerd[1504]: time="2026-01-20T01:35:47.494764144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 01:35:47.526001 containerd[1504]: time="2026-01-20T01:35:47.525165338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l79ft,Uid:ae04f041-7ea5-4e96-846f-d6f46ef5a64b,Namespace:calico-system,Attempt:0,} returns sandbox id \"06689a5296ed8811d0c0f1f6a7cefc89f5eecc1e8b760b412353818d5f0cbc1d\"" Jan 20 01:35:48.991608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3454181296.mount: Deactivated successfully.
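The kubelet triplet above (driver-call.go:262, driver-call.go:149, plugins.go:697) repeats throughout this window: on each dynamic probe of /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, kubelet finds the nodeagent~uds directory, execs its driver binary uds with the argument init, gets no output because the binary is not installed yet, and then fails to decode that empty output as JSON. The "unexpected end of JSON input" text is the standard encoding/json error for empty input; a minimal sketch reproducing it follows (the DriverStatus shape and names are illustrative assumptions, not kubelet's actual types):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // DriverStatus mirrors the kind of status object kubelet expects a
    // FlexVolume driver to print; this field set is an illustrative
    // assumption, not kubelet's actual type.
    type DriverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func main() {
        // The uds binary is missing, so the captured driver output is empty.
        output := []byte("")

        var st DriverStatus
        if err := json.Unmarshal(output, &st); err != nil {
            // Prints: decode failed: unexpected end of JSON input
            fmt.Println("decode failed:", err)
        }
    }

Once a driver binary exists at that path and prints valid JSON, the probe succeeds and the triplet stops.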
Jan 20 01:35:49.275915 kubelet[2683]: E0120 01:35:49.275676 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:35:51.031260 containerd[1504]: time="2026-01-20T01:35:51.031138469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:51.033241 containerd[1504]: time="2026-01-20T01:35:51.033188282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 20 01:35:51.034064 containerd[1504]: time="2026-01-20T01:35:51.034007400Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:51.036818 containerd[1504]: time="2026-01-20T01:35:51.036776410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:51.039072 containerd[1504]: time="2026-01-20T01:35:51.038890555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.54406617s" Jan 20 01:35:51.039072 containerd[1504]: time="2026-01-20T01:35:51.038959759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 01:35:51.042730 containerd[1504]: time="2026-01-20T01:35:51.041336868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 01:35:51.074355 containerd[1504]: time="2026-01-20T01:35:51.074288161Z" level=info msg="CreateContainer within sandbox \"3222a0956995315f5337b791e0ef2b8c5a2dae715c4388620af57f5680dc7018\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 01:35:51.090907 containerd[1504]: time="2026-01-20T01:35:51.090009047Z" level=info msg="CreateContainer within sandbox \"3222a0956995315f5337b791e0ef2b8c5a2dae715c4388620af57f5680dc7018\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5ff4435ee5847d680d02c552494a3f9f3089ea67fc78d04727076eaab4dfb98a\"" Jan 20 01:35:51.093748 containerd[1504]: time="2026-01-20T01:35:51.092756015Z" level=info msg="StartContainer for \"5ff4435ee5847d680d02c552494a3f9f3089ea67fc78d04727076eaab4dfb98a\"" Jan 20 01:35:51.183233 systemd[1]: Started cri-containerd-5ff4435ee5847d680d02c552494a3f9f3089ea67fc78d04727076eaab4dfb98a.scope - libcontainer container 5ff4435ee5847d680d02c552494a3f9f3089ea67fc78d04727076eaab4dfb98a. 
Jan 20 01:35:51.252048 containerd[1504]: time="2026-01-20T01:35:51.251967657Z" level=info msg="StartContainer for \"5ff4435ee5847d680d02c552494a3f9f3089ea67fc78d04727076eaab4dfb98a\" returns successfully" Jan 20 01:35:51.275845 kubelet[2683]: E0120 01:35:51.275214 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:35:51.543634 kubelet[2683]: E0120 01:35:51.543543 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:51.543634 kubelet[2683]: W0120 01:35:51.543625 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:51.543985 kubelet[2683]: E0120 01:35:51.543701 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jan 20 01:35:52.512104 kubelet[2683]: I0120 01:35:52.512042 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:35:52.556623 containerd[1504]: time="2026-01-20T01:35:52.556501608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:52.557953 containerd[1504]: time="2026-01-20T01:35:52.557784930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 20 01:35:52.566508 containerd[1504]: time="2026-01-20T01:35:52.566465677Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:52.567518 kubelet[2683]: E0120 01:35:52.567474 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.567684 kubelet[2683]: W0120 01:35:52.567654 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.567878 kubelet[2683]: E0120 01:35:52.567749 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.568994 kubelet[2683]: E0120 01:35:52.568674 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.568994 kubelet[2683]: W0120 01:35:52.568696 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.568994 kubelet[2683]: E0120 01:35:52.568715 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.570153 kubelet[2683]: E0120 01:35:52.569038 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.570153 kubelet[2683]: W0120 01:35:52.569052 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.570153 kubelet[2683]: E0120 01:35:52.569067 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.570754 kubelet[2683]: E0120 01:35:52.570732 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.570969 kubelet[2683]: W0120 01:35:52.570918 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.571130 kubelet[2683]: E0120 01:35:52.571081 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:35:52.572711 kubelet[2683]: E0120 01:35:52.571639 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.572711 kubelet[2683]: W0120 01:35:52.571658 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.572711 kubelet[2683]: E0120 01:35:52.571675 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.572711 kubelet[2683]: E0120 01:35:52.572612 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.572711 kubelet[2683]: W0120 01:35:52.572629 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.572711 kubelet[2683]: E0120 01:35:52.572644 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.573327 containerd[1504]: time="2026-01-20T01:35:52.571960973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:52.573699 kubelet[2683]: E0120 01:35:52.573468 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.573699 kubelet[2683]: W0120 01:35:52.573487 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.573699 kubelet[2683]: E0120 01:35:52.573504 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:35:52.574170 kubelet[2683]: E0120 01:35:52.574150 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.574238 containerd[1504]: time="2026-01-20T01:35:52.574143301Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.532727929s" Jan 20 01:35:52.574238 containerd[1504]: time="2026-01-20T01:35:52.574192749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 01:35:52.574925 kubelet[2683]: W0120 01:35:52.574391 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.574925 kubelet[2683]: E0120 01:35:52.574427 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.575403 kubelet[2683]: E0120 01:35:52.575370 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.575563 kubelet[2683]: W0120 01:35:52.575517 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.575814 kubelet[2683]: E0120 01:35:52.575672 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.577161 kubelet[2683]: E0120 01:35:52.577017 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.577161 kubelet[2683]: W0120 01:35:52.577036 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.577161 kubelet[2683]: E0120 01:35:52.577052 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.577597 kubelet[2683]: E0120 01:35:52.577429 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.577597 kubelet[2683]: W0120 01:35:52.577448 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.577597 kubelet[2683]: E0120 01:35:52.577464 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:35:52.577846 kubelet[2683]: E0120 01:35:52.577827 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.578121 kubelet[2683]: W0120 01:35:52.577961 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.578121 kubelet[2683]: E0120 01:35:52.578011 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.578758 kubelet[2683]: E0120 01:35:52.578616 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.578758 kubelet[2683]: W0120 01:35:52.578675 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.578758 kubelet[2683]: E0120 01:35:52.578694 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.579661 kubelet[2683]: E0120 01:35:52.579612 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.580377 kubelet[2683]: W0120 01:35:52.579755 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.580377 kubelet[2683]: E0120 01:35:52.579776 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.580840 kubelet[2683]: E0120 01:35:52.580822 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.581195 kubelet[2683]: W0120 01:35:52.581033 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.581195 kubelet[2683]: E0120 01:35:52.581058 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:35:52.583679 containerd[1504]: time="2026-01-20T01:35:52.583629509Z" level=info msg="CreateContainer within sandbox \"06689a5296ed8811d0c0f1f6a7cefc89f5eecc1e8b760b412353818d5f0cbc1d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 01:35:52.630741 kubelet[2683]: E0120 01:35:52.630469 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.630741 kubelet[2683]: W0120 01:35:52.630496 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.630741 kubelet[2683]: E0120 01:35:52.630520 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.631330 kubelet[2683]: E0120 01:35:52.631236 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.631330 kubelet[2683]: W0120 01:35:52.631255 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.631330 kubelet[2683]: E0120 01:35:52.631272 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.631681 kubelet[2683]: E0120 01:35:52.631655 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.631681 kubelet[2683]: W0120 01:35:52.631679 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.631862 kubelet[2683]: E0120 01:35:52.631697 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.634033 kubelet[2683]: E0120 01:35:52.632030 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.634033 kubelet[2683]: W0120 01:35:52.632045 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.634033 kubelet[2683]: E0120 01:35:52.632061 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:35:52.634033 kubelet[2683]: E0120 01:35:52.632397 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.634033 kubelet[2683]: W0120 01:35:52.632426 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.634033 kubelet[2683]: E0120 01:35:52.632452 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.634033 kubelet[2683]: E0120 01:35:52.632847 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.634033 kubelet[2683]: W0120 01:35:52.632861 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.634033 kubelet[2683]: E0120 01:35:52.632887 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.634033 kubelet[2683]: E0120 01:35:52.633230 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.634724 kubelet[2683]: W0120 01:35:52.633245 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.634724 kubelet[2683]: E0120 01:35:52.633260 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.634724 kubelet[2683]: E0120 01:35:52.633603 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.634724 kubelet[2683]: W0120 01:35:52.633618 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.634724 kubelet[2683]: E0120 01:35:52.633634 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.634724 kubelet[2683]: E0120 01:35:52.634014 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.634724 kubelet[2683]: W0120 01:35:52.634119 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.634724 kubelet[2683]: E0120 01:35:52.634136 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:35:52.634724 kubelet[2683]: E0120 01:35:52.634525 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.634724 kubelet[2683]: W0120 01:35:52.634560 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.635623 kubelet[2683]: E0120 01:35:52.635030 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.635623 kubelet[2683]: E0120 01:35:52.635393 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.635623 kubelet[2683]: W0120 01:35:52.635409 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.635623 kubelet[2683]: E0120 01:35:52.635425 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.636880 kubelet[2683]: E0120 01:35:52.636855 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.636880 kubelet[2683]: W0120 01:35:52.636879 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.637136 kubelet[2683]: E0120 01:35:52.636897 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.637608 kubelet[2683]: E0120 01:35:52.637550 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.637608 kubelet[2683]: W0120 01:35:52.637589 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.637608 kubelet[2683]: E0120 01:35:52.637607 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.638074 kubelet[2683]: E0120 01:35:52.638051 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.638213 kubelet[2683]: W0120 01:35:52.638075 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.638213 kubelet[2683]: E0120 01:35:52.638093 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:35:52.638538 kubelet[2683]: E0120 01:35:52.638516 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.638637 kubelet[2683]: W0120 01:35:52.638537 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.638637 kubelet[2683]: E0120 01:35:52.638630 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.639136 kubelet[2683]: E0120 01:35:52.639114 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.639136 kubelet[2683]: W0120 01:35:52.639135 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.639390 kubelet[2683]: E0120 01:35:52.639151 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.644517 kubelet[2683]: E0120 01:35:52.643684 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.644517 kubelet[2683]: W0120 01:35:52.643705 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.644517 kubelet[2683]: E0120 01:35:52.643722 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.645358 kubelet[2683]: E0120 01:35:52.645337 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:35:52.645577 kubelet[2683]: W0120 01:35:52.645551 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:35:52.645696 kubelet[2683]: E0120 01:35:52.645656 2683 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:35:52.653060 containerd[1504]: time="2026-01-20T01:35:52.652888307Z" level=info msg="CreateContainer within sandbox \"06689a5296ed8811d0c0f1f6a7cefc89f5eecc1e8b760b412353818d5f0cbc1d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"89030c34dac974ef4b28ceefa57368c78fd1fa240dbee9720572b54cad9629b5\"" Jan 20 01:35:52.655062 containerd[1504]: time="2026-01-20T01:35:52.655013943Z" level=info msg="StartContainer for \"89030c34dac974ef4b28ceefa57368c78fd1fa240dbee9720572b54cad9629b5\"" Jan 20 01:35:52.656757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821925433.mount: Deactivated successfully. 
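[Note on the repeated triplet above: this is kubelet's FlexVolume prober re-scanning /opt/libexec/kubernetes/kubelet-plugins/volume/exec. For each plugin directory it execs the driver binary with the single argument "init" and expects a JSON status object on stdout; because the nodeagent~uds/uds binary is absent on this node, the exec fails, the captured output is empty, and unmarshalling an empty string yields exactly "unexpected end of JSON input". A minimal Go sketch of that call pattern follows; it illustrates the FlexVolume driver-call protocol and is not kubelet's actual code, and the driverStatus fields are the conventional FlexVolume ones rather than something taken from this log.]

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus mirrors the JSON a FlexVolume driver prints on stdout
    // ("Success", "Failure", "Not supported"); field names follow the
    // FlexVolume convention and are assumptions for illustration.
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    // callDriver runs the plugin binary with "init" and parses its output.
    // With a missing binary, out stays empty and json.Unmarshal returns
    // "unexpected end of JSON input" -- the same pairing of errors logged
    // above. (kubelet resolves the binary through its own exec wrapper,
    // hence its "executable file not found in $PATH" wording.)
    func callDriver(path string) (*driverStatus, error) {
        out, err := exec.Command(path, "init").CombinedOutput()
        if err != nil {
            fmt.Printf("driver call failed: %v, output: %q\n", err, out)
        }
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            return nil, fmt.Errorf("failed to unmarshal output for command: init: %w", err)
        }
        return &st, nil
    }

    func main() {
        _, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
        fmt.Println(err)
    }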
Jan 20 01:35:52.749220 systemd[1]: Started cri-containerd-89030c34dac974ef4b28ceefa57368c78fd1fa240dbee9720572b54cad9629b5.scope - libcontainer container 89030c34dac974ef4b28ceefa57368c78fd1fa240dbee9720572b54cad9629b5. Jan 20 01:35:52.800474 containerd[1504]: time="2026-01-20T01:35:52.800266383Z" level=info msg="StartContainer for \"89030c34dac974ef4b28ceefa57368c78fd1fa240dbee9720572b54cad9629b5\" returns successfully" Jan 20 01:35:52.829924 systemd[1]: cri-containerd-89030c34dac974ef4b28ceefa57368c78fd1fa240dbee9720572b54cad9629b5.scope: Deactivated successfully. Jan 20 01:35:53.043079 containerd[1504]: time="2026-01-20T01:35:53.018254063Z" level=info msg="shim disconnected" id=89030c34dac974ef4b28ceefa57368c78fd1fa240dbee9720572b54cad9629b5 namespace=k8s.io Jan 20 01:35:53.043079 containerd[1504]: time="2026-01-20T01:35:53.043060629Z" level=warning msg="cleaning up after shim disconnected" id=89030c34dac974ef4b28ceefa57368c78fd1fa240dbee9720572b54cad9629b5 namespace=k8s.io Jan 20 01:35:53.043079 containerd[1504]: time="2026-01-20T01:35:53.043114891Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 01:35:53.054242 systemd[1]: run-containerd-runc-k8s.io-89030c34dac974ef4b28ceefa57368c78fd1fa240dbee9720572b54cad9629b5-runc.J9CROO.mount: Deactivated successfully. Jan 20 01:35:53.054459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89030c34dac974ef4b28ceefa57368c78fd1fa240dbee9720572b54cad9629b5-rootfs.mount: Deactivated successfully. Jan 20 01:35:53.275605 kubelet[2683]: E0120 01:35:53.275338 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:35:53.529979 containerd[1504]: time="2026-01-20T01:35:53.527867419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 01:35:53.561019 kubelet[2683]: I0120 01:35:53.560104 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57dff764d4-vm74r" podStartSLOduration=4.011467659 podStartE2EDuration="7.558845533s" podCreationTimestamp="2026-01-20 01:35:46 +0000 UTC" firstStartedPulling="2026-01-20 01:35:47.492469097 +0000 UTC m=+27.480842226" lastFinishedPulling="2026-01-20 01:35:51.039846972 +0000 UTC m=+31.028220100" observedRunningTime="2026-01-20 01:35:51.608635983 +0000 UTC m=+31.597009147" watchObservedRunningTime="2026-01-20 01:35:53.558845533 +0000 UTC m=+33.547218674" Jan 20 01:35:55.277022 kubelet[2683]: E0120 01:35:55.276383 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:35:56.667408 systemd[1]: Started sshd@13-10.230.15.2:22-134.209.94.87:56480.service - OpenSSH per-connection server daemon (134.209.94.87:56480). Jan 20 01:35:56.872022 sshd[3426]: Connection closed by authenticating user root 134.209.94.87 port 56480 [preauth] Jan 20 01:35:56.874032 systemd[1]: sshd@13-10.230.15.2:22-134.209.94.87:56480.service: Deactivated successfully. 
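[Note on the pod_startup_latency_tracker entry above: the figures are self-consistent arithmetic. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp: 01:35:53.558845533 - 01:35:46.000000000 = 7.558845533 s. podStartSLOduration appears to exclude the image-pull window, lastFinishedPulling minus firstStartedPulling (monotonic readings m=+31.028220100 - m=+27.480842226 = 3.547377874 s), since 7.558845533 - 3.547377874 = 4.011467659 s, exactly the logged value. That pull time is excluded from the startup SLI is inferred from these figures rather than stated anywhere in the log.]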
Jan 20 01:35:57.275621 kubelet[2683]: E0120 01:35:57.275530 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:35:58.766276 containerd[1504]: time="2026-01-20T01:35:58.766142341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:58.767966 containerd[1504]: time="2026-01-20T01:35:58.767819757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 20 01:35:58.769071 containerd[1504]: time="2026-01-20T01:35:58.769011451Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:58.773585 containerd[1504]: time="2026-01-20T01:35:58.773503763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:35:58.774696 containerd[1504]: time="2026-01-20T01:35:58.774652321Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.24667623s" Jan 20 01:35:58.774784 containerd[1504]: time="2026-01-20T01:35:58.774701436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 01:35:58.783612 containerd[1504]: time="2026-01-20T01:35:58.783494672Z" level=info msg="CreateContainer within sandbox \"06689a5296ed8811d0c0f1f6a7cefc89f5eecc1e8b760b412353818d5f0cbc1d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 01:35:58.806168 containerd[1504]: time="2026-01-20T01:35:58.806110633Z" level=info msg="CreateContainer within sandbox \"06689a5296ed8811d0c0f1f6a7cefc89f5eecc1e8b760b412353818d5f0cbc1d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"70aa25de62bc82b725f555d70e6938ec10983d82aeaa1b1e93fce88a81eaa2a8\"" Jan 20 01:35:58.807281 containerd[1504]: time="2026-01-20T01:35:58.807057101Z" level=info msg="StartContainer for \"70aa25de62bc82b725f555d70e6938ec10983d82aeaa1b1e93fce88a81eaa2a8\"" Jan 20 01:35:58.885243 systemd[1]: Started cri-containerd-70aa25de62bc82b725f555d70e6938ec10983d82aeaa1b1e93fce88a81eaa2a8.scope - libcontainer container 70aa25de62bc82b725f555d70e6938ec10983d82aeaa1b1e93fce88a81eaa2a8. 
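[Note: the multi-second pull of ghcr.io/flatcar/calico/cni:v3.30.4 recorded above runs through containerd's CRI plugin in the k8s.io namespace. For reference, the equivalent operation through containerd's Go client (v1 layout) is a single Pull call; this is a hedged sketch, with the socket path assumed to be the stock /run/containerd/containerd.sock, not a transcript of what the CRI plugin does internally.]

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the same daemon emitting the containerd[1504] lines.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace, as the
        // shim-cleanup entries above (namespace=k8s.io) show.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.30.4", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }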
Jan 20 01:35:58.940718 containerd[1504]: time="2026-01-20T01:35:58.940652103Z" level=info msg="StartContainer for \"70aa25de62bc82b725f555d70e6938ec10983d82aeaa1b1e93fce88a81eaa2a8\" returns successfully" Jan 20 01:35:59.275606 kubelet[2683]: E0120 01:35:59.275481 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:36:00.131342 systemd[1]: cri-containerd-70aa25de62bc82b725f555d70e6938ec10983d82aeaa1b1e93fce88a81eaa2a8.scope: Deactivated successfully. Jan 20 01:36:00.198724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70aa25de62bc82b725f555d70e6938ec10983d82aeaa1b1e93fce88a81eaa2a8-rootfs.mount: Deactivated successfully. Jan 20 01:36:00.204907 containerd[1504]: time="2026-01-20T01:36:00.204773289Z" level=info msg="shim disconnected" id=70aa25de62bc82b725f555d70e6938ec10983d82aeaa1b1e93fce88a81eaa2a8 namespace=k8s.io Jan 20 01:36:00.205737 containerd[1504]: time="2026-01-20T01:36:00.204907540Z" level=warning msg="cleaning up after shim disconnected" id=70aa25de62bc82b725f555d70e6938ec10983d82aeaa1b1e93fce88a81eaa2a8 namespace=k8s.io Jan 20 01:36:00.205737 containerd[1504]: time="2026-01-20T01:36:00.204925173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 01:36:00.226963 kubelet[2683]: I0120 01:36:00.226604 2683 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 20 01:36:00.312914 systemd[1]: Created slice kubepods-burstable-podfa1696f7_a972_4ca4_8cc6_bdf816751e94.slice - libcontainer container kubepods-burstable-podfa1696f7_a972_4ca4_8cc6_bdf816751e94.slice. Jan 20 01:36:00.333492 systemd[1]: Created slice kubepods-besteffort-pod06135b5f_1f1c_49a1_be4c_0a62e543b91b.slice - libcontainer container kubepods-besteffort-pod06135b5f_1f1c_49a1_be4c_0a62e543b91b.slice. Jan 20 01:36:00.350447 systemd[1]: Created slice kubepods-besteffort-pod72e0069f_0dfe_458b_8762_abad903cdba3.slice - libcontainer container kubepods-besteffort-pod72e0069f_0dfe_458b_8762_abad903cdba3.slice. Jan 20 01:36:00.368877 systemd[1]: Created slice kubepods-besteffort-pod9e8ed20d_7ae4_416a_a5ca_28bbd455038b.slice - libcontainer container kubepods-besteffort-pod9e8ed20d_7ae4_416a_a5ca_28bbd455038b.slice. Jan 20 01:36:00.382039 systemd[1]: Created slice kubepods-besteffort-pod2652f767_bf33_49f7_b353_182252d33510.slice - libcontainer container kubepods-besteffort-pod2652f767_bf33_49f7_b353_182252d33510.slice. Jan 20 01:36:00.395234 systemd[1]: Created slice kubepods-besteffort-pod708249e2_7049_4ff6_8bf2_b94a10ee1bca.slice - libcontainer container kubepods-besteffort-pod708249e2_7049_4ff6_8bf2_b94a10ee1bca.slice. 
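[Note on the Created slice entries above: the names follow a mechanical rule. Kubelet's systemd cgroup driver nests each pod under its QoS class (burstable, besteffort) and encodes the pod UID with dashes replaced by underscores, because "-" is the hierarchy separator in systemd slice names. A small reconstruction of that rule in Go, illustrative rather than kubelet source, which reproduces the names in this log exactly:]

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceForPod derives the systemd slice name for a pod: QoS-class parent
    // plus "pod" + UID, with dashes swapped for underscores because "-"
    // separates hierarchy levels in systemd slice names. (Guaranteed-QoS
    // pods sit directly under kubepods; only the two classes seen in this
    // log are handled here.)
    func sliceForPod(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceForPod("burstable", "fa1696f7-a972-4ca4-8cc6-bdf816751e94"))
        // Output: kubepods-burstable-podfa1696f7_a972_4ca4_8cc6_bdf816751e94.slice
        fmt.Println(sliceForPod("besteffort", "06135b5f-1f1c-49a1-be4c-0a62e543b91b"))
        // Output: kubepods-besteffort-pod06135b5f_1f1c_49a1_be4c_0a62e543b91b.slice
    }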
Jan 20 01:36:00.400283 kubelet[2683]: I0120 01:36:00.400241 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkttl\" (UniqueName: \"kubernetes.io/projected/2652f767-bf33-49f7-b353-182252d33510-kube-api-access-zkttl\") pod \"goldmane-7c778bb748-x6hg8\" (UID: \"2652f767-bf33-49f7-b353-182252d33510\") " pod="calico-system/goldmane-7c778bb748-x6hg8" Jan 20 01:36:00.400784 kubelet[2683]: I0120 01:36:00.400314 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/72e0069f-0dfe-458b-8762-abad903cdba3-calico-apiserver-certs\") pod \"calico-apiserver-c6469cbc-qrwh4\" (UID: \"72e0069f-0dfe-458b-8762-abad903cdba3\") " pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" Jan 20 01:36:00.400784 kubelet[2683]: I0120 01:36:00.400389 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2652f767-bf33-49f7-b353-182252d33510-config\") pod \"goldmane-7c778bb748-x6hg8\" (UID: \"2652f767-bf33-49f7-b353-182252d33510\") " pod="calico-system/goldmane-7c778bb748-x6hg8" Jan 20 01:36:00.400784 kubelet[2683]: I0120 01:36:00.400423 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9e8ed20d-7ae4-416a-a5ca-28bbd455038b-calico-apiserver-certs\") pod \"calico-apiserver-c6469cbc-m6w49\" (UID: \"9e8ed20d-7ae4-416a-a5ca-28bbd455038b\") " pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" Jan 20 01:36:00.400784 kubelet[2683]: I0120 01:36:00.400492 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jz78\" (UniqueName: \"kubernetes.io/projected/9e8ed20d-7ae4-416a-a5ca-28bbd455038b-kube-api-access-6jz78\") pod \"calico-apiserver-c6469cbc-m6w49\" (UID: \"9e8ed20d-7ae4-416a-a5ca-28bbd455038b\") " pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" Jan 20 01:36:00.400784 kubelet[2683]: I0120 01:36:00.400547 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/708249e2-7049-4ff6-8bf2-b94a10ee1bca-tigera-ca-bundle\") pod \"calico-kube-controllers-7d65cdbcf4-xqqft\" (UID: \"708249e2-7049-4ff6-8bf2-b94a10ee1bca\") " pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" Jan 20 01:36:00.405179 kubelet[2683]: I0120 01:36:00.400591 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg8lt\" (UniqueName: \"kubernetes.io/projected/fa1696f7-a972-4ca4-8cc6-bdf816751e94-kube-api-access-zg8lt\") pod \"coredns-66bc5c9577-wkmq8\" (UID: \"fa1696f7-a972-4ca4-8cc6-bdf816751e94\") " pod="kube-system/coredns-66bc5c9577-wkmq8" Jan 20 01:36:00.405179 kubelet[2683]: I0120 01:36:00.400772 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/06135b5f-1f1c-49a1-be4c-0a62e543b91b-whisker-backend-key-pair\") pod \"whisker-6ffbdbf646-685v4\" (UID: \"06135b5f-1f1c-49a1-be4c-0a62e543b91b\") " pod="calico-system/whisker-6ffbdbf646-685v4" Jan 20 01:36:00.405179 kubelet[2683]: I0120 01:36:00.401612 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/06135b5f-1f1c-49a1-be4c-0a62e543b91b-whisker-ca-bundle\") pod \"whisker-6ffbdbf646-685v4\" (UID: \"06135b5f-1f1c-49a1-be4c-0a62e543b91b\") " pod="calico-system/whisker-6ffbdbf646-685v4" Jan 20 01:36:00.405179 kubelet[2683]: I0120 01:36:00.401683 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa1696f7-a972-4ca4-8cc6-bdf816751e94-config-volume\") pod \"coredns-66bc5c9577-wkmq8\" (UID: \"fa1696f7-a972-4ca4-8cc6-bdf816751e94\") " pod="kube-system/coredns-66bc5c9577-wkmq8" Jan 20 01:36:00.405179 kubelet[2683]: I0120 01:36:00.401739 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn8p2\" (UniqueName: \"kubernetes.io/projected/06135b5f-1f1c-49a1-be4c-0a62e543b91b-kube-api-access-sn8p2\") pod \"whisker-6ffbdbf646-685v4\" (UID: \"06135b5f-1f1c-49a1-be4c-0a62e543b91b\") " pod="calico-system/whisker-6ffbdbf646-685v4" Jan 20 01:36:00.405476 kubelet[2683]: I0120 01:36:00.401793 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh7sr\" (UniqueName: \"kubernetes.io/projected/708249e2-7049-4ff6-8bf2-b94a10ee1bca-kube-api-access-rh7sr\") pod \"calico-kube-controllers-7d65cdbcf4-xqqft\" (UID: \"708249e2-7049-4ff6-8bf2-b94a10ee1bca\") " pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" Jan 20 01:36:00.405476 kubelet[2683]: I0120 01:36:00.401847 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a922b510-6dd4-4211-8d8f-a8df2985776c-config-volume\") pod \"coredns-66bc5c9577-nlp7d\" (UID: \"a922b510-6dd4-4211-8d8f-a8df2985776c\") " pod="kube-system/coredns-66bc5c9577-nlp7d" Jan 20 01:36:00.405476 kubelet[2683]: I0120 01:36:00.401918 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2652f767-bf33-49f7-b353-182252d33510-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-x6hg8\" (UID: \"2652f767-bf33-49f7-b353-182252d33510\") " pod="calico-system/goldmane-7c778bb748-x6hg8" Jan 20 01:36:00.405476 kubelet[2683]: I0120 01:36:00.401990 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mx4b\" (UniqueName: \"kubernetes.io/projected/72e0069f-0dfe-458b-8762-abad903cdba3-kube-api-access-7mx4b\") pod \"calico-apiserver-c6469cbc-qrwh4\" (UID: \"72e0069f-0dfe-458b-8762-abad903cdba3\") " pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" Jan 20 01:36:00.405476 kubelet[2683]: I0120 01:36:00.402231 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhknp\" (UniqueName: \"kubernetes.io/projected/a922b510-6dd4-4211-8d8f-a8df2985776c-kube-api-access-lhknp\") pod \"coredns-66bc5c9577-nlp7d\" (UID: \"a922b510-6dd4-4211-8d8f-a8df2985776c\") " pod="kube-system/coredns-66bc5c9577-nlp7d" Jan 20 01:36:00.407710 kubelet[2683]: I0120 01:36:00.402646 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2652f767-bf33-49f7-b353-182252d33510-goldmane-key-pair\") pod \"goldmane-7c778bb748-x6hg8\" (UID: \"2652f767-bf33-49f7-b353-182252d33510\") " pod="calico-system/goldmane-7c778bb748-x6hg8" Jan 20 
01:36:00.408028 systemd[1]: Created slice kubepods-burstable-poda922b510_6dd4_4211_8d8f_a8df2985776c.slice - libcontainer container kubepods-burstable-poda922b510_6dd4_4211_8d8f_a8df2985776c.slice. Jan 20 01:36:00.622043 containerd[1504]: time="2026-01-20T01:36:00.620266657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 01:36:00.633473 containerd[1504]: time="2026-01-20T01:36:00.632603969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wkmq8,Uid:fa1696f7-a972-4ca4-8cc6-bdf816751e94,Namespace:kube-system,Attempt:0,}" Jan 20 01:36:00.644360 containerd[1504]: time="2026-01-20T01:36:00.644291702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ffbdbf646-685v4,Uid:06135b5f-1f1c-49a1-be4c-0a62e543b91b,Namespace:calico-system,Attempt:0,}" Jan 20 01:36:00.679103 containerd[1504]: time="2026-01-20T01:36:00.679031541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6469cbc-qrwh4,Uid:72e0069f-0dfe-458b-8762-abad903cdba3,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:36:00.680328 containerd[1504]: time="2026-01-20T01:36:00.680009613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6469cbc-m6w49,Uid:9e8ed20d-7ae4-416a-a5ca-28bbd455038b,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:36:00.708474 containerd[1504]: time="2026-01-20T01:36:00.708411285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d65cdbcf4-xqqft,Uid:708249e2-7049-4ff6-8bf2-b94a10ee1bca,Namespace:calico-system,Attempt:0,}" Jan 20 01:36:00.708837 containerd[1504]: time="2026-01-20T01:36:00.708778271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-x6hg8,Uid:2652f767-bf33-49f7-b353-182252d33510,Namespace:calico-system,Attempt:0,}" Jan 20 01:36:00.727415 containerd[1504]: time="2026-01-20T01:36:00.726575229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nlp7d,Uid:a922b510-6dd4-4211-8d8f-a8df2985776c,Namespace:kube-system,Attempt:0,}" Jan 20 01:36:01.214009 containerd[1504]: time="2026-01-20T01:36:01.211896576Z" level=error msg="Failed to destroy network for sandbox \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.238723 containerd[1504]: time="2026-01-20T01:36:01.238243595Z" level=error msg="encountered an error cleaning up failed sandbox \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.238723 containerd[1504]: time="2026-01-20T01:36:01.238422317Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nlp7d,Uid:a922b510-6dd4-4211-8d8f-a8df2985776c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.240183 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824-shm.mount: Deactivated successfully. Jan 20 01:36:01.249368 containerd[1504]: time="2026-01-20T01:36:01.245577452Z" level=error msg="Failed to destroy network for sandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.249368 containerd[1504]: time="2026-01-20T01:36:01.248660949Z" level=error msg="encountered an error cleaning up failed sandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.249368 containerd[1504]: time="2026-01-20T01:36:01.248743973Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6469cbc-qrwh4,Uid:72e0069f-0dfe-458b-8762-abad903cdba3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.251093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b-shm.mount: Deactivated successfully. Jan 20 01:36:01.261245 kubelet[2683]: E0120 01:36:01.261118 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.261646 kubelet[2683]: E0120 01:36:01.261428 2683 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" Jan 20 01:36:01.261646 kubelet[2683]: E0120 01:36:01.261487 2683 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" Jan 20 01:36:01.261793 kubelet[2683]: E0120 01:36:01.261618 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6469cbc-qrwh4_calico-apiserver(72e0069f-0dfe-458b-8762-abad903cdba3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-c6469cbc-qrwh4_calico-apiserver(72e0069f-0dfe-458b-8762-abad903cdba3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" podUID="72e0069f-0dfe-458b-8762-abad903cdba3" Jan 20 01:36:01.263470 kubelet[2683]: E0120 01:36:01.263386 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.263568 kubelet[2683]: E0120 01:36:01.263480 2683 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nlp7d" Jan 20 01:36:01.263655 kubelet[2683]: E0120 01:36:01.263622 2683 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nlp7d" Jan 20 01:36:01.263749 kubelet[2683]: E0120 01:36:01.263709 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nlp7d_kube-system(a922b510-6dd4-4211-8d8f-a8df2985776c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nlp7d_kube-system(a922b510-6dd4-4211-8d8f-a8df2985776c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nlp7d" podUID="a922b510-6dd4-4211-8d8f-a8df2985776c" Jan 20 01:36:01.266096 containerd[1504]: time="2026-01-20T01:36:01.265829840Z" level=error msg="Failed to destroy network for sandbox \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.271889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2-shm.mount: Deactivated successfully. 
Jan 20 01:36:01.278136 containerd[1504]: time="2026-01-20T01:36:01.274777862Z" level=error msg="encountered an error cleaning up failed sandbox \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.278136 containerd[1504]: time="2026-01-20T01:36:01.276979627Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wkmq8,Uid:fa1696f7-a972-4ca4-8cc6-bdf816751e94,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.279767 kubelet[2683]: E0120 01:36:01.278705 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.279767 kubelet[2683]: E0120 01:36:01.278799 2683 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wkmq8" Jan 20 01:36:01.279767 kubelet[2683]: E0120 01:36:01.278828 2683 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wkmq8" Jan 20 01:36:01.283819 kubelet[2683]: E0120 01:36:01.278919 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wkmq8_kube-system(fa1696f7-a972-4ca4-8cc6-bdf816751e94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wkmq8_kube-system(fa1696f7-a972-4ca4-8cc6-bdf816751e94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wkmq8" podUID="fa1696f7-a972-4ca4-8cc6-bdf816751e94" Jan 20 01:36:01.299085 containerd[1504]: time="2026-01-20T01:36:01.298613413Z" level=error msg="Failed to destroy network for sandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 20 01:36:01.300588 containerd[1504]: time="2026-01-20T01:36:01.300541305Z" level=error msg="encountered an error cleaning up failed sandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.300704 containerd[1504]: time="2026-01-20T01:36:01.300650061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6469cbc-m6w49,Uid:9e8ed20d-7ae4-416a-a5ca-28bbd455038b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.302537 systemd[1]: Created slice kubepods-besteffort-podfbc3977f_2a7c_42f2_a24b_94a3c5a0bac9.slice - libcontainer container kubepods-besteffort-podfbc3977f_2a7c_42f2_a24b_94a3c5a0bac9.slice. Jan 20 01:36:01.311110 kubelet[2683]: E0120 01:36:01.304504 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.311110 kubelet[2683]: E0120 01:36:01.304631 2683 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" Jan 20 01:36:01.311110 kubelet[2683]: E0120 01:36:01.304715 2683 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" Jan 20 01:36:01.311358 containerd[1504]: time="2026-01-20T01:36:01.308072763Z" level=error msg="Failed to destroy network for sandbox \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.311358 containerd[1504]: time="2026-01-20T01:36:01.309078622Z" level=error msg="encountered an error cleaning up failed sandbox \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 
01:36:01.311358 containerd[1504]: time="2026-01-20T01:36:01.309191092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-x6hg8,Uid:2652f767-bf33-49f7-b353-182252d33510,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.309184 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231-shm.mount: Deactivated successfully. Jan 20 01:36:01.311716 kubelet[2683]: E0120 01:36:01.304930 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6469cbc-m6w49_calico-apiserver(9e8ed20d-7ae4-416a-a5ca-28bbd455038b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c6469cbc-m6w49_calico-apiserver(9e8ed20d-7ae4-416a-a5ca-28bbd455038b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" podUID="9e8ed20d-7ae4-416a-a5ca-28bbd455038b" Jan 20 01:36:01.316063 kubelet[2683]: E0120 01:36:01.313517 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.316063 kubelet[2683]: E0120 01:36:01.315430 2683 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-x6hg8" Jan 20 01:36:01.316063 kubelet[2683]: E0120 01:36:01.315469 2683 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-x6hg8" Jan 20 01:36:01.316564 kubelet[2683]: E0120 01:36:01.315546 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-x6hg8_calico-system(2652f767-bf33-49f7-b353-182252d33510)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-x6hg8_calico-system(2652f767-bf33-49f7-b353-182252d33510)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-x6hg8" podUID="2652f767-bf33-49f7-b353-182252d33510" Jan 20 01:36:01.344996 containerd[1504]: time="2026-01-20T01:36:01.343016436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wdqf6,Uid:fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9,Namespace:calico-system,Attempt:0,}" Jan 20 01:36:01.345834 containerd[1504]: time="2026-01-20T01:36:01.345368135Z" level=error msg="Failed to destroy network for sandbox \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.346739 containerd[1504]: time="2026-01-20T01:36:01.346661442Z" level=error msg="encountered an error cleaning up failed sandbox \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.347017 containerd[1504]: time="2026-01-20T01:36:01.346928672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ffbdbf646-685v4,Uid:06135b5f-1f1c-49a1-be4c-0a62e543b91b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.347540 kubelet[2683]: E0120 01:36:01.347487 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.347739 kubelet[2683]: E0120 01:36:01.347705 2683 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6ffbdbf646-685v4" Jan 20 01:36:01.347878 kubelet[2683]: E0120 01:36:01.347848 2683 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6ffbdbf646-685v4" Jan 20 01:36:01.348137 kubelet[2683]: E0120 01:36:01.348095 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6ffbdbf646-685v4_calico-system(06135b5f-1f1c-49a1-be4c-0a62e543b91b)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6ffbdbf646-685v4_calico-system(06135b5f-1f1c-49a1-be4c-0a62e543b91b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6ffbdbf646-685v4" podUID="06135b5f-1f1c-49a1-be4c-0a62e543b91b" Jan 20 01:36:01.350704 containerd[1504]: time="2026-01-20T01:36:01.350653395Z" level=error msg="Failed to destroy network for sandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.351415 containerd[1504]: time="2026-01-20T01:36:01.351375017Z" level=error msg="encountered an error cleaning up failed sandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.351640 containerd[1504]: time="2026-01-20T01:36:01.351551729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d65cdbcf4-xqqft,Uid:708249e2-7049-4ff6-8bf2-b94a10ee1bca,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.352770 kubelet[2683]: E0120 01:36:01.352131 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.352770 kubelet[2683]: E0120 01:36:01.352197 2683 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" Jan 20 01:36:01.352770 kubelet[2683]: E0120 01:36:01.352230 2683 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" Jan 20 01:36:01.353302 kubelet[2683]: E0120 01:36:01.352285 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-7d65cdbcf4-xqqft_calico-system(708249e2-7049-4ff6-8bf2-b94a10ee1bca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d65cdbcf4-xqqft_calico-system(708249e2-7049-4ff6-8bf2-b94a10ee1bca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" podUID="708249e2-7049-4ff6-8bf2-b94a10ee1bca" Jan 20 01:36:01.465379 containerd[1504]: time="2026-01-20T01:36:01.465022861Z" level=error msg="Failed to destroy network for sandbox \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.466754 containerd[1504]: time="2026-01-20T01:36:01.466358978Z" level=error msg="encountered an error cleaning up failed sandbox \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.466754 containerd[1504]: time="2026-01-20T01:36:01.466466713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wdqf6,Uid:fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.467863 kubelet[2683]: E0120 01:36:01.467123 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.467863 kubelet[2683]: E0120 01:36:01.467227 2683 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wdqf6" Jan 20 01:36:01.467863 kubelet[2683]: E0120 01:36:01.467257 2683 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wdqf6" Jan 20 01:36:01.470686 
kubelet[2683]: E0120 01:36:01.467379 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wdqf6_calico-system(fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wdqf6_calico-system(fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:36:01.623073 kubelet[2683]: I0120 01:36:01.622358 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Jan 20 01:36:01.627443 kubelet[2683]: I0120 01:36:01.627048 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Jan 20 01:36:01.634885 kubelet[2683]: I0120 01:36:01.634844 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:36:01.639620 kubelet[2683]: I0120 01:36:01.639466 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:36:01.644577 kubelet[2683]: I0120 01:36:01.644541 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:36:01.652284 containerd[1504]: time="2026-01-20T01:36:01.651788792Z" level=info msg="StopPodSandbox for \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\"" Jan 20 01:36:01.652284 containerd[1504]: time="2026-01-20T01:36:01.651855295Z" level=info msg="StopPodSandbox for \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\"" Jan 20 01:36:01.653609 containerd[1504]: time="2026-01-20T01:36:01.653529243Z" level=info msg="Ensure that sandbox 921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c in task-service has been cleanup successfully" Jan 20 01:36:01.654037 containerd[1504]: time="2026-01-20T01:36:01.653978752Z" level=info msg="StopPodSandbox for \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\"" Jan 20 01:36:01.654886 containerd[1504]: time="2026-01-20T01:36:01.654833234Z" level=info msg="Ensure that sandbox 9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231 in task-service has been cleanup successfully" Jan 20 01:36:01.659564 containerd[1504]: time="2026-01-20T01:36:01.653532350Z" level=info msg="Ensure that sandbox 45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009 in task-service has been cleanup successfully" Jan 20 01:36:01.660517 containerd[1504]: time="2026-01-20T01:36:01.651816536Z" level=info msg="StopPodSandbox for \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\"" Jan 20 01:36:01.660517 containerd[1504]: time="2026-01-20T01:36:01.660202416Z" level=info msg="Ensure that sandbox e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824 in task-service has been cleanup successfully" Jan 20 01:36:01.662887 containerd[1504]: 
time="2026-01-20T01:36:01.653541264Z" level=info msg="StopPodSandbox for \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\"" Jan 20 01:36:01.663189 containerd[1504]: time="2026-01-20T01:36:01.663145232Z" level=info msg="Ensure that sandbox 01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0 in task-service has been cleanup successfully" Jan 20 01:36:01.667455 kubelet[2683]: I0120 01:36:01.666648 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Jan 20 01:36:01.668477 containerd[1504]: time="2026-01-20T01:36:01.668443187Z" level=info msg="StopPodSandbox for \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\"" Jan 20 01:36:01.672380 containerd[1504]: time="2026-01-20T01:36:01.672327529Z" level=info msg="Ensure that sandbox a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2 in task-service has been cleanup successfully" Jan 20 01:36:01.689282 kubelet[2683]: I0120 01:36:01.689233 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:36:01.698602 containerd[1504]: time="2026-01-20T01:36:01.698551439Z" level=info msg="StopPodSandbox for \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\"" Jan 20 01:36:01.702172 containerd[1504]: time="2026-01-20T01:36:01.702123429Z" level=info msg="Ensure that sandbox 83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b in task-service has been cleanup successfully" Jan 20 01:36:01.712247 kubelet[2683]: I0120 01:36:01.711397 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Jan 20 01:36:01.717195 containerd[1504]: time="2026-01-20T01:36:01.717062518Z" level=info msg="StopPodSandbox for \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\"" Jan 20 01:36:01.718054 containerd[1504]: time="2026-01-20T01:36:01.717613784Z" level=info msg="Ensure that sandbox 4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e in task-service has been cleanup successfully" Jan 20 01:36:01.799431 containerd[1504]: time="2026-01-20T01:36:01.799331304Z" level=error msg="StopPodSandbox for \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\" failed" error="failed to destroy network for sandbox \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.799831 kubelet[2683]: E0120 01:36:01.799775 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Jan 20 01:36:01.820752 kubelet[2683]: E0120 01:36:01.799889 2683 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009"} Jan 20 01:36:01.820752 kubelet[2683]: E0120 01:36:01.819981 2683 
kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06135b5f-1f1c-49a1-be4c-0a62e543b91b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:36:01.820752 kubelet[2683]: E0120 01:36:01.820035 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06135b5f-1f1c-49a1-be4c-0a62e543b91b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6ffbdbf646-685v4" podUID="06135b5f-1f1c-49a1-be4c-0a62e543b91b" Jan 20 01:36:01.833861 containerd[1504]: time="2026-01-20T01:36:01.833737164Z" level=error msg="StopPodSandbox for \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\" failed" error="failed to destroy network for sandbox \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.835228 kubelet[2683]: E0120 01:36:01.835159 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Jan 20 01:36:01.835519 kubelet[2683]: E0120 01:36:01.835242 2683 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2"} Jan 20 01:36:01.835519 kubelet[2683]: E0120 01:36:01.835294 2683 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa1696f7-a972-4ca4-8cc6-bdf816751e94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:36:01.835519 kubelet[2683]: E0120 01:36:01.835363 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa1696f7-a972-4ca4-8cc6-bdf816751e94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wkmq8" 
podUID="fa1696f7-a972-4ca4-8cc6-bdf816751e94" Jan 20 01:36:01.863345 containerd[1504]: time="2026-01-20T01:36:01.862646379Z" level=error msg="StopPodSandbox for \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\" failed" error="failed to destroy network for sandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.863555 kubelet[2683]: E0120 01:36:01.863191 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:36:01.864429 kubelet[2683]: E0120 01:36:01.863265 2683 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231"} Jan 20 01:36:01.864429 kubelet[2683]: E0120 01:36:01.864091 2683 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e8ed20d-7ae4-416a-a5ca-28bbd455038b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:36:01.864429 kubelet[2683]: E0120 01:36:01.864251 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e8ed20d-7ae4-416a-a5ca-28bbd455038b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" podUID="9e8ed20d-7ae4-416a-a5ca-28bbd455038b" Jan 20 01:36:01.883130 containerd[1504]: time="2026-01-20T01:36:01.881484461Z" level=error msg="StopPodSandbox for \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\" failed" error="failed to destroy network for sandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.883318 kubelet[2683]: E0120 01:36:01.881897 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:36:01.883318 
kubelet[2683]: E0120 01:36:01.882022 2683 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0"} Jan 20 01:36:01.883318 kubelet[2683]: E0120 01:36:01.882103 2683 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"708249e2-7049-4ff6-8bf2-b94a10ee1bca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:36:01.883318 kubelet[2683]: E0120 01:36:01.883039 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"708249e2-7049-4ff6-8bf2-b94a10ee1bca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" podUID="708249e2-7049-4ff6-8bf2-b94a10ee1bca" Jan 20 01:36:01.883855 containerd[1504]: time="2026-01-20T01:36:01.883817440Z" level=error msg="StopPodSandbox for \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\" failed" error="failed to destroy network for sandbox \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.884403 kubelet[2683]: E0120 01:36:01.884340 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Jan 20 01:36:01.884645 kubelet[2683]: E0120 01:36:01.884420 2683 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824"} Jan 20 01:36:01.884645 kubelet[2683]: E0120 01:36:01.884472 2683 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a922b510-6dd4-4211-8d8f-a8df2985776c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:36:01.884645 kubelet[2683]: E0120 01:36:01.884515 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a922b510-6dd4-4211-8d8f-a8df2985776c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nlp7d" podUID="a922b510-6dd4-4211-8d8f-a8df2985776c" Jan 20 01:36:01.889625 containerd[1504]: time="2026-01-20T01:36:01.889534915Z" level=error msg="StopPodSandbox for \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\" failed" error="failed to destroy network for sandbox \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.890517 kubelet[2683]: E0120 01:36:01.890067 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:36:01.890517 kubelet[2683]: E0120 01:36:01.890153 2683 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c"} Jan 20 01:36:01.890517 kubelet[2683]: E0120 01:36:01.890237 2683 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2652f767-bf33-49f7-b353-182252d33510\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:36:01.890517 kubelet[2683]: E0120 01:36:01.890281 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2652f767-bf33-49f7-b353-182252d33510\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-x6hg8" podUID="2652f767-bf33-49f7-b353-182252d33510" Jan 20 01:36:01.906362 containerd[1504]: time="2026-01-20T01:36:01.906257293Z" level=error msg="StopPodSandbox for \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\" failed" error="failed to destroy network for sandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.906814 kubelet[2683]: E0120 01:36:01.906679 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:36:01.906814 kubelet[2683]: E0120 01:36:01.906756 2683 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b"} Jan 20 01:36:01.908011 kubelet[2683]: E0120 01:36:01.906803 2683 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72e0069f-0dfe-458b-8762-abad903cdba3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:36:01.908011 kubelet[2683]: E0120 01:36:01.906877 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72e0069f-0dfe-458b-8762-abad903cdba3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" podUID="72e0069f-0dfe-458b-8762-abad903cdba3" Jan 20 01:36:01.911045 containerd[1504]: time="2026-01-20T01:36:01.910983711Z" level=error msg="StopPodSandbox for \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\" failed" error="failed to destroy network for sandbox \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:01.911835 kubelet[2683]: E0120 01:36:01.911600 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Jan 20 01:36:01.911835 kubelet[2683]: E0120 01:36:01.911685 2683 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e"} Jan 20 01:36:01.911835 kubelet[2683]: E0120 01:36:01.911744 2683 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:36:01.911835 kubelet[2683]: E0120 
01:36:01.911788 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:36:02.198879 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c-shm.mount: Deactivated successfully. Jan 20 01:36:02.199336 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0-shm.mount: Deactivated successfully. Jan 20 01:36:02.199451 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009-shm.mount: Deactivated successfully. Jan 20 01:36:03.176539 kubelet[2683]: I0120 01:36:03.176462 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:36:09.921369 systemd[1]: Started sshd@14-10.230.15.2:22-152.42.141.173:51700.service - OpenSSH per-connection server daemon (152.42.141.173:51700). Jan 20 01:36:11.687852 sshd[3837]: Connection closed by authenticating user root 152.42.141.173 port 51700 [preauth] Jan 20 01:36:11.696964 systemd[1]: sshd@14-10.230.15.2:22-152.42.141.173:51700.service: Deactivated successfully. Jan 20 01:36:12.440728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518487414.mount: Deactivated successfully. 
Jan 20 01:36:12.597500 containerd[1504]: time="2026-01-20T01:36:12.580463146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 20 01:36:12.598673 containerd[1504]: time="2026-01-20T01:36:12.597587246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:36:12.656046 containerd[1504]: time="2026-01-20T01:36:12.655306737Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:36:12.657669 containerd[1504]: time="2026-01-20T01:36:12.657604772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.037132417s" Jan 20 01:36:12.657766 containerd[1504]: time="2026-01-20T01:36:12.657670290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 01:36:12.658213 containerd[1504]: time="2026-01-20T01:36:12.658006044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:36:12.742931 containerd[1504]: time="2026-01-20T01:36:12.742355167Z" level=info msg="CreateContainer within sandbox \"06689a5296ed8811d0c0f1f6a7cefc89f5eecc1e8b760b412353818d5f0cbc1d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 01:36:12.816810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1274066939.mount: Deactivated successfully. Jan 20 01:36:12.831144 containerd[1504]: time="2026-01-20T01:36:12.831039448Z" level=info msg="CreateContainer within sandbox \"06689a5296ed8811d0c0f1f6a7cefc89f5eecc1e8b760b412353818d5f0cbc1d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"217ae75a45a34f0d6ef08397025aba8335653e13cad69937e92f07535188a805\"" Jan 20 01:36:12.841995 containerd[1504]: time="2026-01-20T01:36:12.839995715Z" level=info msg="StartContainer for \"217ae75a45a34f0d6ef08397025aba8335653e13cad69937e92f07535188a805\"" Jan 20 01:36:13.032269 systemd[1]: Started cri-containerd-217ae75a45a34f0d6ef08397025aba8335653e13cad69937e92f07535188a805.scope - libcontainer container 217ae75a45a34f0d6ef08397025aba8335653e13cad69937e92f07535188a805. 
Jan 20 01:36:13.121689 containerd[1504]: time="2026-01-20T01:36:13.121635479Z" level=info msg="StartContainer for \"217ae75a45a34f0d6ef08397025aba8335653e13cad69937e92f07535188a805\" returns successfully" Jan 20 01:36:13.279436 containerd[1504]: time="2026-01-20T01:36:13.278855259Z" level=info msg="StopPodSandbox for \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\"" Jan 20 01:36:13.283122 containerd[1504]: time="2026-01-20T01:36:13.280293672Z" level=info msg="StopPodSandbox for \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\"" Jan 20 01:36:13.287443 containerd[1504]: time="2026-01-20T01:36:13.287398686Z" level=info msg="StopPodSandbox for \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\"" Jan 20 01:36:13.288391 containerd[1504]: time="2026-01-20T01:36:13.288088999Z" level=info msg="StopPodSandbox for \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\"" Jan 20 01:36:13.491892 containerd[1504]: time="2026-01-20T01:36:13.491055602Z" level=error msg="StopPodSandbox for \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\" failed" error="failed to destroy network for sandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:13.491892 containerd[1504]: time="2026-01-20T01:36:13.491187002Z" level=error msg="StopPodSandbox for \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\" failed" error="failed to destroy network for sandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:13.493096 kubelet[2683]: E0120 01:36:13.491523 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:36:13.493096 kubelet[2683]: E0120 01:36:13.491669 2683 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231"} Jan 20 01:36:13.493096 kubelet[2683]: E0120 01:36:13.491752 2683 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e8ed20d-7ae4-416a-a5ca-28bbd455038b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:36:13.493096 kubelet[2683]: E0120 01:36:13.491523 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:36:13.493096 kubelet[2683]: E0120 01:36:13.491825 2683 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0"} Jan 20 01:36:13.497732 kubelet[2683]: E0120 01:36:13.491818 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e8ed20d-7ae4-416a-a5ca-28bbd455038b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" podUID="9e8ed20d-7ae4-416a-a5ca-28bbd455038b" Jan 20 01:36:13.497732 kubelet[2683]: E0120 01:36:13.491895 2683 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"708249e2-7049-4ff6-8bf2-b94a10ee1bca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:36:13.497732 kubelet[2683]: E0120 01:36:13.491954 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"708249e2-7049-4ff6-8bf2-b94a10ee1bca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" podUID="708249e2-7049-4ff6-8bf2-b94a10ee1bca" Jan 20 01:36:13.499334 containerd[1504]: time="2026-01-20T01:36:13.495902518Z" level=error msg="StopPodSandbox for \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\" failed" error="failed to destroy network for sandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:13.499334 containerd[1504]: time="2026-01-20T01:36:13.497013230Z" level=error msg="StopPodSandbox for \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\" failed" error="failed to destroy network for sandbox \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:36:13.499447 kubelet[2683]: E0120 01:36:13.498638 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:36:13.499447 kubelet[2683]: E0120 01:36:13.498639 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:36:13.499447 kubelet[2683]: E0120 01:36:13.498699 2683 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c"} Jan 20 01:36:13.499447 kubelet[2683]: E0120 01:36:13.498706 2683 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b"} Jan 20 01:36:13.499447 kubelet[2683]: E0120 01:36:13.498755 2683 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2652f767-bf33-49f7-b353-182252d33510\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:36:13.499753 kubelet[2683]: E0120 01:36:13.498758 2683 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72e0069f-0dfe-458b-8762-abad903cdba3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:36:13.499753 kubelet[2683]: E0120 01:36:13.498801 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2652f767-bf33-49f7-b353-182252d33510\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-x6hg8" podUID="2652f767-bf33-49f7-b353-182252d33510" Jan 20 01:36:13.499753 kubelet[2683]: E0120 01:36:13.498812 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72e0069f-0dfe-458b-8762-abad903cdba3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" podUID="72e0069f-0dfe-458b-8762-abad903cdba3" Jan 20 01:36:13.764422 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 01:36:13.769002 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 20 01:36:13.922541 kubelet[2683]: I0120 01:36:13.914753 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l79ft" podStartSLOduration=2.7704985239999997 podStartE2EDuration="27.902246216s" podCreationTimestamp="2026-01-20 01:35:46 +0000 UTC" firstStartedPulling="2026-01-20 01:35:47.532480686 +0000 UTC m=+27.520853811" lastFinishedPulling="2026-01-20 01:36:12.664228373 +0000 UTC m=+52.652601503" observedRunningTime="2026-01-20 01:36:13.870566006 +0000 UTC m=+53.858939149" watchObservedRunningTime="2026-01-20 01:36:13.902246216 +0000 UTC m=+53.890619379" Jan 20 01:36:14.143982 containerd[1504]: time="2026-01-20T01:36:14.143871362Z" level=info msg="StopPodSandbox for \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\"" Jan 20 01:36:14.296568 containerd[1504]: time="2026-01-20T01:36:14.296045571Z" level=info msg="StopPodSandbox for \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\"" Jan 20 01:36:14.835141 containerd[1504]: 2026-01-20 01:36:14.268 [INFO][3980] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Jan 20 01:36:14.835141 containerd[1504]: 2026-01-20 01:36:14.271 [INFO][3980] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" iface="eth0" netns="/var/run/netns/cni-8c3bee82-d927-9c4d-6949-a2eb04302a59" Jan 20 01:36:14.835141 containerd[1504]: 2026-01-20 01:36:14.272 [INFO][3980] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" iface="eth0" netns="/var/run/netns/cni-8c3bee82-d927-9c4d-6949-a2eb04302a59" Jan 20 01:36:14.835141 containerd[1504]: 2026-01-20 01:36:14.275 [INFO][3980] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" iface="eth0" netns="/var/run/netns/cni-8c3bee82-d927-9c4d-6949-a2eb04302a59" Jan 20 01:36:14.835141 containerd[1504]: 2026-01-20 01:36:14.275 [INFO][3980] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Jan 20 01:36:14.835141 containerd[1504]: 2026-01-20 01:36:14.275 [INFO][3980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Jan 20 01:36:14.835141 containerd[1504]: 2026-01-20 01:36:14.792 [INFO][3993] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" HandleID="k8s-pod-network.45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Workload="srv--nmle2.gb1.brightbox.com-k8s-whisker--6ffbdbf646--685v4-eth0" Jan 20 01:36:14.835141 containerd[1504]: 2026-01-20 01:36:14.796 [INFO][3993] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 20 01:36:14.835141 containerd[1504]: 2026-01-20 01:36:14.796 [INFO][3993] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:14.835141 containerd[1504]: 2026-01-20 01:36:14.822 [WARNING][3993] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" HandleID="k8s-pod-network.45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Workload="srv--nmle2.gb1.brightbox.com-k8s-whisker--6ffbdbf646--685v4-eth0" Jan 20 01:36:14.835141 containerd[1504]: 2026-01-20 01:36:14.822 [INFO][3993] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" HandleID="k8s-pod-network.45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Workload="srv--nmle2.gb1.brightbox.com-k8s-whisker--6ffbdbf646--685v4-eth0" Jan 20 01:36:14.835141 containerd[1504]: 2026-01-20 01:36:14.826 [INFO][3993] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:14.835141 containerd[1504]: 2026-01-20 01:36:14.829 [INFO][3980] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Jan 20 01:36:14.842215 containerd[1504]: time="2026-01-20T01:36:14.839187067Z" level=info msg="TearDown network for sandbox \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\" successfully" Jan 20 01:36:14.842215 containerd[1504]: time="2026-01-20T01:36:14.839270646Z" level=info msg="StopPodSandbox for \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\" returns successfully" Jan 20 01:36:14.841674 systemd[1]: run-netns-cni\x2d8c3bee82\x2dd927\x2d9c4d\x2d6949\x2da2eb04302a59.mount: Deactivated successfully. Jan 20 01:36:14.888605 containerd[1504]: 2026-01-20 01:36:14.574 [INFO][4003] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Jan 20 01:36:14.888605 containerd[1504]: 2026-01-20 01:36:14.575 [INFO][4003] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" iface="eth0" netns="/var/run/netns/cni-29930e0d-27ad-a6cf-c72c-4389224d1537" Jan 20 01:36:14.888605 containerd[1504]: 2026-01-20 01:36:14.575 [INFO][4003] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" iface="eth0" netns="/var/run/netns/cni-29930e0d-27ad-a6cf-c72c-4389224d1537" Jan 20 01:36:14.888605 containerd[1504]: 2026-01-20 01:36:14.575 [INFO][4003] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" iface="eth0" netns="/var/run/netns/cni-29930e0d-27ad-a6cf-c72c-4389224d1537" Jan 20 01:36:14.888605 containerd[1504]: 2026-01-20 01:36:14.575 [INFO][4003] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Jan 20 01:36:14.888605 containerd[1504]: 2026-01-20 01:36:14.575 [INFO][4003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Jan 20 01:36:14.888605 containerd[1504]: 2026-01-20 01:36:14.793 [INFO][4014] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" HandleID="k8s-pod-network.e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:14.888605 containerd[1504]: 2026-01-20 01:36:14.795 [INFO][4014] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:14.888605 containerd[1504]: 2026-01-20 01:36:14.825 [INFO][4014] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:14.888605 containerd[1504]: 2026-01-20 01:36:14.871 [WARNING][4014] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" HandleID="k8s-pod-network.e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:14.888605 containerd[1504]: 2026-01-20 01:36:14.871 [INFO][4014] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" HandleID="k8s-pod-network.e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:14.888605 containerd[1504]: 2026-01-20 01:36:14.876 [INFO][4014] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:14.888605 containerd[1504]: 2026-01-20 01:36:14.882 [INFO][4003] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Jan 20 01:36:14.893989 containerd[1504]: time="2026-01-20T01:36:14.893074086Z" level=info msg="TearDown network for sandbox \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\" successfully" Jan 20 01:36:14.893989 containerd[1504]: time="2026-01-20T01:36:14.893124433Z" level=info msg="StopPodSandbox for \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\" returns successfully" Jan 20 01:36:14.898178 containerd[1504]: time="2026-01-20T01:36:14.897250869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nlp7d,Uid:a922b510-6dd4-4211-8d8f-a8df2985776c,Namespace:kube-system,Attempt:1,}" Jan 20 01:36:14.897870 systemd[1]: run-netns-cni\x2d29930e0d\x2d27ad\x2da6cf\x2dc72c\x2d4389224d1537.mount: Deactivated successfully. 
Jan 20 01:36:14.972026 kubelet[2683]: I0120 01:36:14.970325 2683 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/06135b5f-1f1c-49a1-be4c-0a62e543b91b-whisker-backend-key-pair\") pod \"06135b5f-1f1c-49a1-be4c-0a62e543b91b\" (UID: \"06135b5f-1f1c-49a1-be4c-0a62e543b91b\") " Jan 20 01:36:14.972826 kubelet[2683]: I0120 01:36:14.972171 2683 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn8p2\" (UniqueName: \"kubernetes.io/projected/06135b5f-1f1c-49a1-be4c-0a62e543b91b-kube-api-access-sn8p2\") pod \"06135b5f-1f1c-49a1-be4c-0a62e543b91b\" (UID: \"06135b5f-1f1c-49a1-be4c-0a62e543b91b\") " Jan 20 01:36:14.972826 kubelet[2683]: I0120 01:36:14.972276 2683 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06135b5f-1f1c-49a1-be4c-0a62e543b91b-whisker-ca-bundle\") pod \"06135b5f-1f1c-49a1-be4c-0a62e543b91b\" (UID: \"06135b5f-1f1c-49a1-be4c-0a62e543b91b\") " Jan 20 01:36:15.047069 kubelet[2683]: I0120 01:36:15.031554 2683 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06135b5f-1f1c-49a1-be4c-0a62e543b91b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "06135b5f-1f1c-49a1-be4c-0a62e543b91b" (UID: "06135b5f-1f1c-49a1-be4c-0a62e543b91b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 01:36:15.056284 kubelet[2683]: I0120 01:36:15.054755 2683 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06135b5f-1f1c-49a1-be4c-0a62e543b91b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "06135b5f-1f1c-49a1-be4c-0a62e543b91b" (UID: "06135b5f-1f1c-49a1-be4c-0a62e543b91b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 01:36:15.058952 systemd[1]: var-lib-kubelet-pods-06135b5f\x2d1f1c\x2d49a1\x2dbe4c\x2d0a62e543b91b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 01:36:15.081497 kubelet[2683]: I0120 01:36:15.080638 2683 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06135b5f-1f1c-49a1-be4c-0a62e543b91b-kube-api-access-sn8p2" (OuterVolumeSpecName: "kube-api-access-sn8p2") pod "06135b5f-1f1c-49a1-be4c-0a62e543b91b" (UID: "06135b5f-1f1c-49a1-be4c-0a62e543b91b"). InnerVolumeSpecName "kube-api-access-sn8p2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 01:36:15.095518 kubelet[2683]: I0120 01:36:15.094513 2683 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/06135b5f-1f1c-49a1-be4c-0a62e543b91b-whisker-backend-key-pair\") on node \"srv-nmle2.gb1.brightbox.com\" DevicePath \"\"" Jan 20 01:36:15.095518 kubelet[2683]: I0120 01:36:15.094563 2683 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sn8p2\" (UniqueName: \"kubernetes.io/projected/06135b5f-1f1c-49a1-be4c-0a62e543b91b-kube-api-access-sn8p2\") on node \"srv-nmle2.gb1.brightbox.com\" DevicePath \"\"" Jan 20 01:36:15.095518 kubelet[2683]: I0120 01:36:15.094583 2683 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06135b5f-1f1c-49a1-be4c-0a62e543b91b-whisker-ca-bundle\") on node \"srv-nmle2.gb1.brightbox.com\" DevicePath \"\"" Jan 20 01:36:15.371360 systemd-networkd[1413]: cali2c7211c5b34: Link UP Jan 20 01:36:15.371772 systemd-networkd[1413]: cali2c7211c5b34: Gained carrier Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.078 [INFO][4024] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.132 [INFO][4024] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0 coredns-66bc5c9577- kube-system a922b510-6dd4-4211-8d8f-a8df2985776c 937 0 2026-01-20 01:35:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-nmle2.gb1.brightbox.com coredns-66bc5c9577-nlp7d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2c7211c5b34 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" Namespace="kube-system" Pod="coredns-66bc5c9577-nlp7d" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-" Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.132 [INFO][4024] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" Namespace="kube-system" Pod="coredns-66bc5c9577-nlp7d" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.229 [INFO][4045] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" HandleID="k8s-pod-network.9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.229 [INFO][4045] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" HandleID="k8s-pod-network.9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000365140), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-nmle2.gb1.brightbox.com", 
"pod":"coredns-66bc5c9577-nlp7d", "timestamp":"2026-01-20 01:36:15.229470677 +0000 UTC"}, Hostname:"srv-nmle2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.229 [INFO][4045] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.231 [INFO][4045] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.232 [INFO][4045] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nmle2.gb1.brightbox.com' Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.255 [INFO][4045] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.270 [INFO][4045] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.280 [INFO][4045] ipam/ipam.go 511: Trying affinity for 192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.283 [INFO][4045] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.288 [INFO][4045] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.288 [INFO][4045] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.192/26 handle="k8s-pod-network.9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.293 [INFO][4045] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.306 [INFO][4045] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.192/26 handle="k8s-pod-network.9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.324 [INFO][4045] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.84.192/26] block=192.168.84.192/26 handle="k8s-pod-network.9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.324 [INFO][4045] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.192/26] handle="k8s-pod-network.9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.324 [INFO][4045] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:36:15.420729 containerd[1504]: 2026-01-20 01:36:15.324 [INFO][4045] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.192/26] IPv6=[] ContainerID="9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" HandleID="k8s-pod-network.9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:15.429334 containerd[1504]: 2026-01-20 01:36:15.328 [INFO][4024] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" Namespace="kube-system" Pod="coredns-66bc5c9577-nlp7d" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a922b510-6dd4-4211-8d8f-a8df2985776c", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-nlp7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c7211c5b34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:15.429334 containerd[1504]: 2026-01-20 01:36:15.328 [INFO][4024] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.192/32] ContainerID="9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" Namespace="kube-system" Pod="coredns-66bc5c9577-nlp7d" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:15.429334 containerd[1504]: 2026-01-20 01:36:15.328 [INFO][4024] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c7211c5b34 ContainerID="9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" Namespace="kube-system" Pod="coredns-66bc5c9577-nlp7d"
WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:15.429334 containerd[1504]: 2026-01-20 01:36:15.374 [INFO][4024] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" Namespace="kube-system" Pod="coredns-66bc5c9577-nlp7d" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:15.429334 containerd[1504]: 2026-01-20 01:36:15.381 [INFO][4024] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" Namespace="kube-system" Pod="coredns-66bc5c9577-nlp7d" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a922b510-6dd4-4211-8d8f-a8df2985776c", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa", Pod:"coredns-66bc5c9577-nlp7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c7211c5b34", MAC:"1e:ce:f5:39:4e:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:15.429740 containerd[1504]: 2026-01-20 01:36:15.414 [INFO][4024] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa" Namespace="kube-system" Pod="coredns-66bc5c9577-nlp7d" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:15.506837 containerd[1504]: time="2026-01-20T01:36:15.506389678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:36:15.506837 containerd[1504]: time="2026-01-20T01:36:15.506554780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:36:15.506837 containerd[1504]: time="2026-01-20T01:36:15.506587337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:15.506837 containerd[1504]: time="2026-01-20T01:36:15.506768130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:15.561238 systemd[1]: Started cri-containerd-9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa.scope - libcontainer container 9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa. Jan 20 01:36:15.677496 containerd[1504]: time="2026-01-20T01:36:15.677412940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nlp7d,Uid:a922b510-6dd4-4211-8d8f-a8df2985776c,Namespace:kube-system,Attempt:1,} returns sandbox id \"9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa\"" Jan 20 01:36:15.694570 containerd[1504]: time="2026-01-20T01:36:15.694521135Z" level=info msg="CreateContainer within sandbox \"9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:36:15.730885 containerd[1504]: time="2026-01-20T01:36:15.730809771Z" level=info msg="CreateContainer within sandbox \"9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b7bc01c4c8544a51156c813e22780d5f1511a826aa4369cf2b25ddc04c3582f\"" Jan 20 01:36:15.733415 containerd[1504]: time="2026-01-20T01:36:15.732002669Z" level=info msg="StartContainer for \"3b7bc01c4c8544a51156c813e22780d5f1511a826aa4369cf2b25ddc04c3582f\"" Jan 20 01:36:15.778148 systemd[1]: Started cri-containerd-3b7bc01c4c8544a51156c813e22780d5f1511a826aa4369cf2b25ddc04c3582f.scope - libcontainer container 3b7bc01c4c8544a51156c813e22780d5f1511a826aa4369cf2b25ddc04c3582f. Jan 20 01:36:15.845516 systemd[1]: var-lib-kubelet-pods-06135b5f\x2d1f1c\x2d49a1\x2dbe4c\x2d0a62e543b91b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsn8p2.mount: Deactivated successfully. Jan 20 01:36:15.864385 containerd[1504]: time="2026-01-20T01:36:15.864321214Z" level=info msg="StartContainer for \"3b7bc01c4c8544a51156c813e22780d5f1511a826aa4369cf2b25ddc04c3582f\" returns successfully" Jan 20 01:36:15.883353 systemd[1]: Removed slice kubepods-besteffort-pod06135b5f_1f1c_49a1_be4c_0a62e543b91b.slice - libcontainer container kubepods-besteffort-pod06135b5f_1f1c_49a1_be4c_0a62e543b91b.slice. Jan 20 01:36:16.040917 systemd[1]: Created slice kubepods-besteffort-podea506b49_1ce0_4278_a723_d51ad8fec903.slice - libcontainer container kubepods-besteffort-podea506b49_1ce0_4278_a723_d51ad8fec903.slice. 
Jan 20 01:36:16.109667 kubelet[2683]: I0120 01:36:16.109559 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ea506b49-1ce0-4278-a723-d51ad8fec903-whisker-backend-key-pair\") pod \"whisker-6b6cd5cfd4-psk5g\" (UID: \"ea506b49-1ce0-4278-a723-d51ad8fec903\") " pod="calico-system/whisker-6b6cd5cfd4-psk5g" Jan 20 01:36:16.109667 kubelet[2683]: I0120 01:36:16.109660 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44225\" (UniqueName: \"kubernetes.io/projected/ea506b49-1ce0-4278-a723-d51ad8fec903-kube-api-access-44225\") pod \"whisker-6b6cd5cfd4-psk5g\" (UID: \"ea506b49-1ce0-4278-a723-d51ad8fec903\") " pod="calico-system/whisker-6b6cd5cfd4-psk5g" Jan 20 01:36:16.110631 kubelet[2683]: I0120 01:36:16.109720 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea506b49-1ce0-4278-a723-d51ad8fec903-whisker-ca-bundle\") pod \"whisker-6b6cd5cfd4-psk5g\" (UID: \"ea506b49-1ce0-4278-a723-d51ad8fec903\") " pod="calico-system/whisker-6b6cd5cfd4-psk5g" Jan 20 01:36:16.277556 containerd[1504]: time="2026-01-20T01:36:16.277469870Z" level=info msg="StopPodSandbox for \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\"" Jan 20 01:36:16.300742 kubelet[2683]: I0120 01:36:16.300509 2683 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06135b5f-1f1c-49a1-be4c-0a62e543b91b" path="/var/lib/kubelet/pods/06135b5f-1f1c-49a1-be4c-0a62e543b91b/volumes" Jan 20 01:36:16.369102 containerd[1504]: time="2026-01-20T01:36:16.368982754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b6cd5cfd4-psk5g,Uid:ea506b49-1ce0-4278-a723-d51ad8fec903,Namespace:calico-system,Attempt:0,}" Jan 20 01:36:16.469492 containerd[1504]: 2026-01-20 01:36:16.381 [INFO][4199] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Jan 20 01:36:16.469492 containerd[1504]: 2026-01-20 01:36:16.382 [INFO][4199] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" iface="eth0" netns="/var/run/netns/cni-0dc9be99-f714-0ed7-8d41-43ce29f7024e" Jan 20 01:36:16.469492 containerd[1504]: 2026-01-20 01:36:16.382 [INFO][4199] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" iface="eth0" netns="/var/run/netns/cni-0dc9be99-f714-0ed7-8d41-43ce29f7024e" Jan 20 01:36:16.469492 containerd[1504]: 2026-01-20 01:36:16.383 [INFO][4199] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" iface="eth0" netns="/var/run/netns/cni-0dc9be99-f714-0ed7-8d41-43ce29f7024e" Jan 20 01:36:16.469492 containerd[1504]: 2026-01-20 01:36:16.383 [INFO][4199] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Jan 20 01:36:16.469492 containerd[1504]: 2026-01-20 01:36:16.383 [INFO][4199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Jan 20 01:36:16.469492 containerd[1504]: 2026-01-20 01:36:16.445 [INFO][4206] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" HandleID="k8s-pod-network.4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Workload="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:16.469492 containerd[1504]: 2026-01-20 01:36:16.446 [INFO][4206] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:16.469492 containerd[1504]: 2026-01-20 01:36:16.446 [INFO][4206] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:16.469492 containerd[1504]: 2026-01-20 01:36:16.458 [WARNING][4206] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" HandleID="k8s-pod-network.4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Workload="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:16.469492 containerd[1504]: 2026-01-20 01:36:16.458 [INFO][4206] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" HandleID="k8s-pod-network.4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Workload="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:16.469492 containerd[1504]: 2026-01-20 01:36:16.463 [INFO][4206] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:16.469492 containerd[1504]: 2026-01-20 01:36:16.466 [INFO][4199] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Jan 20 01:36:16.472399 containerd[1504]: time="2026-01-20T01:36:16.469696873Z" level=info msg="TearDown network for sandbox \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\" successfully" Jan 20 01:36:16.472399 containerd[1504]: time="2026-01-20T01:36:16.469734519Z" level=info msg="StopPodSandbox for \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\" returns successfully" Jan 20 01:36:16.477149 containerd[1504]: time="2026-01-20T01:36:16.476673660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wdqf6,Uid:fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9,Namespace:calico-system,Attempt:1,}" Jan 20 01:36:16.623327 systemd-networkd[1413]: calif0e72430bb6: Link UP Jan 20 01:36:16.626692 systemd-networkd[1413]: calif0e72430bb6: Gained carrier Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.455 [INFO][4211] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.479 [INFO][4211] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nmle2.gb1.brightbox.com-k8s-whisker--6b6cd5cfd4--psk5g-eth0 whisker-6b6cd5cfd4- calico-system ea506b49-1ce0-4278-a723-d51ad8fec903 973 0 2026-01-20 01:36:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b6cd5cfd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-nmle2.gb1.brightbox.com whisker-6b6cd5cfd4-psk5g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif0e72430bb6 [] [] }} ContainerID="ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" Namespace="calico-system" Pod="whisker-6b6cd5cfd4-psk5g" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-whisker--6b6cd5cfd4--psk5g-" Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.480 [INFO][4211] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" Namespace="calico-system" Pod="whisker-6b6cd5cfd4-psk5g" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-whisker--6b6cd5cfd4--psk5g-eth0" Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.542 [INFO][4224] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" HandleID="k8s-pod-network.ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" Workload="srv--nmle2.gb1.brightbox.com-k8s-whisker--6b6cd5cfd4--psk5g-eth0" Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.542 [INFO][4224] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" HandleID="k8s-pod-network.ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" Workload="srv--nmle2.gb1.brightbox.com-k8s-whisker--6b6cd5cfd4--psk5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-nmle2.gb1.brightbox.com", "pod":"whisker-6b6cd5cfd4-psk5g", "timestamp":"2026-01-20 01:36:16.542566026 +0000 UTC"}, Hostname:"srv-nmle2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.542 [INFO][4224] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.543 [INFO][4224] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.543 [INFO][4224] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nmle2.gb1.brightbox.com' Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.556 [INFO][4224] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.565 [INFO][4224] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.576 [INFO][4224] ipam/ipam.go 511: Trying affinity for 192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.579 [INFO][4224] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.583 [INFO][4224] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.583 [INFO][4224] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.192/26 handle="k8s-pod-network.ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.586 [INFO][4224] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.596 [INFO][4224] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.192/26 handle="k8s-pod-network.ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.604 [INFO][4224] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.84.194/26] block=192.168.84.192/26 handle="k8s-pod-network.ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.604 [INFO][4224] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.194/26] handle="k8s-pod-network.ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.604 [INFO][4224] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:36:16.664058 containerd[1504]: 2026-01-20 01:36:16.604 [INFO][4224] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.194/26] IPv6=[] ContainerID="ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" HandleID="k8s-pod-network.ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" Workload="srv--nmle2.gb1.brightbox.com-k8s-whisker--6b6cd5cfd4--psk5g-eth0" Jan 20 01:36:16.672511 containerd[1504]: 2026-01-20 01:36:16.613 [INFO][4211] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" Namespace="calico-system" Pod="whisker-6b6cd5cfd4-psk5g" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-whisker--6b6cd5cfd4--psk5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-whisker--6b6cd5cfd4--psk5g-eth0", GenerateName:"whisker-6b6cd5cfd4-", Namespace:"calico-system", SelfLink:"", UID:"ea506b49-1ce0-4278-a723-d51ad8fec903", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 36, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b6cd5cfd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"", Pod:"whisker-6b6cd5cfd4-psk5g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.84.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif0e72430bb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:16.672511 containerd[1504]: 2026-01-20 01:36:16.614 [INFO][4211] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.194/32] ContainerID="ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" Namespace="calico-system" Pod="whisker-6b6cd5cfd4-psk5g" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-whisker--6b6cd5cfd4--psk5g-eth0" Jan 20 01:36:16.672511 containerd[1504]: 2026-01-20 01:36:16.614 [INFO][4211] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0e72430bb6 ContainerID="ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" Namespace="calico-system" Pod="whisker-6b6cd5cfd4-psk5g" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-whisker--6b6cd5cfd4--psk5g-eth0" Jan 20 01:36:16.672511 containerd[1504]: 2026-01-20 01:36:16.628 [INFO][4211] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" Namespace="calico-system" Pod="whisker-6b6cd5cfd4-psk5g" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-whisker--6b6cd5cfd4--psk5g-eth0" Jan 20 01:36:16.672511 containerd[1504]: 2026-01-20 01:36:16.629 [INFO][4211] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" 
Namespace="calico-system" Pod="whisker-6b6cd5cfd4-psk5g" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-whisker--6b6cd5cfd4--psk5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-whisker--6b6cd5cfd4--psk5g-eth0", GenerateName:"whisker-6b6cd5cfd4-", Namespace:"calico-system", SelfLink:"", UID:"ea506b49-1ce0-4278-a723-d51ad8fec903", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 36, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b6cd5cfd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd", Pod:"whisker-6b6cd5cfd4-psk5g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.84.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif0e72430bb6", MAC:"46:fa:58:90:71:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:16.672511 containerd[1504]: 2026-01-20 01:36:16.654 [INFO][4211] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd" Namespace="calico-system" Pod="whisker-6b6cd5cfd4-psk5g" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-whisker--6b6cd5cfd4--psk5g-eth0" Jan 20 01:36:16.746075 containerd[1504]: time="2026-01-20T01:36:16.734474154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:36:16.746580 containerd[1504]: time="2026-01-20T01:36:16.746297993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:36:16.746580 containerd[1504]: time="2026-01-20T01:36:16.746380594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:16.748161 containerd[1504]: time="2026-01-20T01:36:16.748078160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:16.777568 systemd-networkd[1413]: cali5e809eb2ead: Link UP Jan 20 01:36:16.779801 systemd-networkd[1413]: cali5e809eb2ead: Gained carrier Jan 20 01:36:16.822242 systemd-networkd[1413]: cali2c7211c5b34: Gained IPv6LL Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.568 [INFO][4228] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.592 [INFO][4228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0 csi-node-driver- calico-system fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9 976 0 2026-01-20 01:35:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-nmle2.gb1.brightbox.com csi-node-driver-wdqf6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5e809eb2ead [] [] }} ContainerID="e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" Namespace="calico-system" Pod="csi-node-driver-wdqf6" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-" Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.593 [INFO][4228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" Namespace="calico-system" Pod="csi-node-driver-wdqf6" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.658 [INFO][4243] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" HandleID="k8s-pod-network.e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" Workload="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.659 [INFO][4243] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" HandleID="k8s-pod-network.e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" Workload="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-nmle2.gb1.brightbox.com", "pod":"csi-node-driver-wdqf6", "timestamp":"2026-01-20 01:36:16.658970368 +0000 UTC"}, Hostname:"srv-nmle2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.659 [INFO][4243] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.659 [INFO][4243] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.659 [INFO][4243] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nmle2.gb1.brightbox.com' Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.675 [INFO][4243] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.687 [INFO][4243] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.699 [INFO][4243] ipam/ipam.go 511: Trying affinity for 192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.704 [INFO][4243] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.712 [INFO][4243] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.712 [INFO][4243] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.192/26 handle="k8s-pod-network.e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.717 [INFO][4243] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.732 [INFO][4243] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.192/26 handle="k8s-pod-network.e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.753 [INFO][4243] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.84.195/26] block=192.168.84.192/26 handle="k8s-pod-network.e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.753 [INFO][4243] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.195/26] handle="k8s-pod-network.e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.753 [INFO][4243] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:36:16.825540 containerd[1504]: 2026-01-20 01:36:16.753 [INFO][4243] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.195/26] IPv6=[] ContainerID="e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" HandleID="k8s-pod-network.e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" Workload="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:16.828379 containerd[1504]: 2026-01-20 01:36:16.760 [INFO][4228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" Namespace="calico-system" Pod="csi-node-driver-wdqf6" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-wdqf6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.84.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5e809eb2ead", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:16.828379 containerd[1504]: 2026-01-20 01:36:16.761 [INFO][4228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.195/32] ContainerID="e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" Namespace="calico-system" Pod="csi-node-driver-wdqf6" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:16.828379 containerd[1504]: 2026-01-20 01:36:16.761 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e809eb2ead ContainerID="e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" Namespace="calico-system" Pod="csi-node-driver-wdqf6" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:16.828379 containerd[1504]: 2026-01-20 01:36:16.780 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" Namespace="calico-system" Pod="csi-node-driver-wdqf6" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:16.828379 containerd[1504]: 2026-01-20 01:36:16.783 [INFO][4228] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" Namespace="calico-system" Pod="csi-node-driver-wdqf6" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a", Pod:"csi-node-driver-wdqf6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.84.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5e809eb2ead", MAC:"72:ba:1f:2c:c8:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:16.828379 containerd[1504]: 2026-01-20 01:36:16.819 [INFO][4228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a" Namespace="calico-system" Pod="csi-node-driver-wdqf6" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:16.835382 systemd[1]: Started cri-containerd-ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd.scope - libcontainer container ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd. Jan 20 01:36:16.850525 systemd[1]: run-netns-cni\x2d0dc9be99\x2df714\x2d0ed7\x2d8d41\x2d43ce29f7024e.mount: Deactivated successfully. Jan 20 01:36:16.894686 kubelet[2683]: I0120 01:36:16.894554 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nlp7d" podStartSLOduration=50.894516016 podStartE2EDuration="50.894516016s" podCreationTimestamp="2026-01-20 01:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:36:16.892646433 +0000 UTC m=+56.881019575" watchObservedRunningTime="2026-01-20 01:36:16.894516016 +0000 UTC m=+56.882889150" Jan 20 01:36:16.910092 containerd[1504]: time="2026-01-20T01:36:16.909341725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:36:16.910092 containerd[1504]: time="2026-01-20T01:36:16.909430201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:36:16.910092 containerd[1504]: time="2026-01-20T01:36:16.909469861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:16.910092 containerd[1504]: time="2026-01-20T01:36:16.909653981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:16.972891 systemd[1]: Started cri-containerd-e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a.scope - libcontainer container e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a. Jan 20 01:36:17.123885 containerd[1504]: time="2026-01-20T01:36:17.123686573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wdqf6,Uid:fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9,Namespace:calico-system,Attempt:1,} returns sandbox id \"e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a\"" Jan 20 01:36:17.138758 containerd[1504]: time="2026-01-20T01:36:17.138396198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:36:17.224351 containerd[1504]: time="2026-01-20T01:36:17.223660374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b6cd5cfd4-psk5g,Uid:ea506b49-1ce0-4278-a723-d51ad8fec903,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec46446c3d328fcd00158bb6ee55662d1d9d7936f54c55f3f9de4762cf3adecd\"" Jan 20 01:36:17.278646 containerd[1504]: time="2026-01-20T01:36:17.278112904Z" level=info msg="StopPodSandbox for \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\"" Jan 20 01:36:17.526230 containerd[1504]: time="2026-01-20T01:36:17.525611935Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:17.532079 containerd[1504]: time="2026-01-20T01:36:17.531978485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:36:17.558597 containerd[1504]: time="2026-01-20T01:36:17.532454389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:36:17.562451 kubelet[2683]: E0120 01:36:17.558516 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:36:17.562451 kubelet[2683]: E0120 01:36:17.562017 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:36:17.566451 containerd[1504]: time="2026-01-20T01:36:17.565442511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:36:17.588164 kubelet[2683]: E0120 01:36:17.580731 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wdqf6_calico-system(fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:17.647533 containerd[1504]: 2026-01-20 01:36:17.458 [INFO][4422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Jan 20 01:36:17.647533 containerd[1504]: 2026-01-20 01:36:17.459 [INFO][4422] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" iface="eth0" netns="/var/run/netns/cni-63a41512-08d6-dc09-7c76-f54144944cf2" Jan 20 01:36:17.647533 containerd[1504]: 2026-01-20 01:36:17.459 [INFO][4422] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" iface="eth0" netns="/var/run/netns/cni-63a41512-08d6-dc09-7c76-f54144944cf2" Jan 20 01:36:17.647533 containerd[1504]: 2026-01-20 01:36:17.460 [INFO][4422] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" iface="eth0" netns="/var/run/netns/cni-63a41512-08d6-dc09-7c76-f54144944cf2" Jan 20 01:36:17.647533 containerd[1504]: 2026-01-20 01:36:17.460 [INFO][4422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Jan 20 01:36:17.647533 containerd[1504]: 2026-01-20 01:36:17.460 [INFO][4422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Jan 20 01:36:17.647533 containerd[1504]: 2026-01-20 01:36:17.593 [INFO][4449] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" HandleID="k8s-pod-network.a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:17.647533 containerd[1504]: 2026-01-20 01:36:17.594 [INFO][4449] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:17.647533 containerd[1504]: 2026-01-20 01:36:17.594 [INFO][4449] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:17.647533 containerd[1504]: 2026-01-20 01:36:17.625 [WARNING][4449] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" HandleID="k8s-pod-network.a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:17.647533 containerd[1504]: 2026-01-20 01:36:17.626 [INFO][4449] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" HandleID="k8s-pod-network.a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:17.647533 containerd[1504]: 2026-01-20 01:36:17.630 [INFO][4449] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:17.647533 containerd[1504]: 2026-01-20 01:36:17.637 [INFO][4422] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Jan 20 01:36:17.657649 containerd[1504]: time="2026-01-20T01:36:17.651586230Z" level=info msg="TearDown network for sandbox \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\" successfully" Jan 20 01:36:17.657649 containerd[1504]: time="2026-01-20T01:36:17.651655938Z" level=info msg="StopPodSandbox for \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\" returns successfully" Jan 20 01:36:17.657162 systemd[1]: run-netns-cni\x2d63a41512\x2d08d6\x2ddc09\x2d7c76\x2df54144944cf2.mount: Deactivated successfully. Jan 20 01:36:17.663345 containerd[1504]: time="2026-01-20T01:36:17.663298071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wkmq8,Uid:fa1696f7-a972-4ca4-8cc6-bdf816751e94,Namespace:kube-system,Attempt:1,}" Jan 20 01:36:17.848727 systemd-networkd[1413]: cali5e809eb2ead: Gained IPv6LL Jan 20 01:36:17.928357 containerd[1504]: time="2026-01-20T01:36:17.928294992Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:17.937013 containerd[1504]: time="2026-01-20T01:36:17.935786189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:36:17.937013 containerd[1504]: time="2026-01-20T01:36:17.937038837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:36:17.938838 kubelet[2683]: E0120 01:36:17.938016 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:36:17.938838 kubelet[2683]: E0120 01:36:17.938344 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:36:17.938838 kubelet[2683]: E0120 01:36:17.938563 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6b6cd5cfd4-psk5g_calico-system(ea506b49-1ce0-4278-a723-d51ad8fec903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:17.940725 containerd[1504]: time="2026-01-20T01:36:17.940094291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:36:17.992193 systemd-networkd[1413]: cali186e51d402b: Link UP Jan 20 01:36:17.994501 systemd-networkd[1413]: cali186e51d402b: Gained carrier Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.781 [INFO][4459] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.803 [INFO][4459] cni-plugin/plugin.go 340: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0 coredns-66bc5c9577- kube-system fa1696f7-a972-4ca4-8cc6-bdf816751e94 996 0 2026-01-20 01:35:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-nmle2.gb1.brightbox.com coredns-66bc5c9577-wkmq8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali186e51d402b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" Namespace="kube-system" Pod="coredns-66bc5c9577-wkmq8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-" Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.803 [INFO][4459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" Namespace="kube-system" Pod="coredns-66bc5c9577-wkmq8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.899 [INFO][4469] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" HandleID="k8s-pod-network.a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.899 [INFO][4469] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" HandleID="k8s-pod-network.a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cfeb0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-nmle2.gb1.brightbox.com", "pod":"coredns-66bc5c9577-wkmq8", "timestamp":"2026-01-20 01:36:17.899644102 +0000 UTC"}, Hostname:"srv-nmle2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.900 [INFO][4469] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.900 [INFO][4469] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.900 [INFO][4469] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nmle2.gb1.brightbox.com' Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.919 [INFO][4469] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.929 [INFO][4469] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.943 [INFO][4469] ipam/ipam.go 511: Trying affinity for 192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.947 [INFO][4469] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.951 [INFO][4469] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.951 [INFO][4469] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.192/26 handle="k8s-pod-network.a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.954 [INFO][4469] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7 Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.962 [INFO][4469] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.192/26 handle="k8s-pod-network.a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.978 [INFO][4469] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.84.196/26] block=192.168.84.192/26 handle="k8s-pod-network.a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.978 [INFO][4469] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.196/26] handle="k8s-pod-network.a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.978 [INFO][4469] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:36:18.038668 containerd[1504]: 2026-01-20 01:36:17.978 [INFO][4469] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.196/26] IPv6=[] ContainerID="a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" HandleID="k8s-pod-network.a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:18.042023 containerd[1504]: 2026-01-20 01:36:17.982 [INFO][4459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" Namespace="kube-system" Pod="coredns-66bc5c9577-wkmq8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fa1696f7-a972-4ca4-8cc6-bdf816751e94", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-wkmq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali186e51d402b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:18.042023 containerd[1504]: 2026-01-20 01:36:17.982 [INFO][4459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.196/32] ContainerID="a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" Namespace="kube-system" Pod="coredns-66bc5c9577-wkmq8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:18.042023 containerd[1504]: 2026-01-20 01:36:17.982 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali186e51d402b ContainerID="a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" Namespace="kube-system" Pod="coredns-66bc5c9577-wkmq8" 
WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:18.042023 containerd[1504]: 2026-01-20 01:36:17.996 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" Namespace="kube-system" Pod="coredns-66bc5c9577-wkmq8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:18.042023 containerd[1504]: 2026-01-20 01:36:17.996 [INFO][4459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" Namespace="kube-system" Pod="coredns-66bc5c9577-wkmq8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fa1696f7-a972-4ca4-8cc6-bdf816751e94", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7", Pod:"coredns-66bc5c9577-wkmq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali186e51d402b", MAC:"a2:ce:68:fa:74:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:18.042486 containerd[1504]: 2026-01-20 01:36:18.024 [INFO][4459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7" Namespace="kube-system" Pod="coredns-66bc5c9577-wkmq8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:18.103929 systemd-networkd[1413]: calif0e72430bb6: Gained IPv6LL Jan 20 01:36:18.121231 containerd[1504]: time="2026-01-20T01:36:18.120203575Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:36:18.122782 containerd[1504]: time="2026-01-20T01:36:18.121046004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:36:18.122782 containerd[1504]: time="2026-01-20T01:36:18.122749234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:18.123038 containerd[1504]: time="2026-01-20T01:36:18.122876137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:18.180195 systemd[1]: Started cri-containerd-a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7.scope - libcontainer container a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7. Jan 20 01:36:18.260967 containerd[1504]: time="2026-01-20T01:36:18.260654223Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:18.264548 containerd[1504]: time="2026-01-20T01:36:18.263017193Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:36:18.264548 containerd[1504]: time="2026-01-20T01:36:18.263121410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:36:18.264712 kubelet[2683]: E0120 01:36:18.263306 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:36:18.264712 kubelet[2683]: E0120 01:36:18.263376 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:36:18.265491 kubelet[2683]: E0120 01:36:18.265334 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wdqf6_calico-system(fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:18.265659 containerd[1504]: time="2026-01-20T01:36:18.265371871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:36:18.266610 kubelet[2683]: E0120 01:36:18.265445 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:36:18.289509 containerd[1504]: time="2026-01-20T01:36:18.289456381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wkmq8,Uid:fa1696f7-a972-4ca4-8cc6-bdf816751e94,Namespace:kube-system,Attempt:1,} returns sandbox id \"a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7\"" Jan 20 01:36:18.305788 containerd[1504]: time="2026-01-20T01:36:18.305735896Z" level=info msg="CreateContainer within sandbox \"a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:36:18.364003 containerd[1504]: time="2026-01-20T01:36:18.362793997Z" level=info msg="CreateContainer within sandbox \"a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e6248722342eecf5c5a2e4b4385b0d3af5b0704e4c3cac1656b8cb4f34d48a9c\"" Jan 20 01:36:18.367365 containerd[1504]: time="2026-01-20T01:36:18.367120092Z" level=info msg="StartContainer for \"e6248722342eecf5c5a2e4b4385b0d3af5b0704e4c3cac1656b8cb4f34d48a9c\"" Jan 20 01:36:18.440390 systemd[1]: Started cri-containerd-e6248722342eecf5c5a2e4b4385b0d3af5b0704e4c3cac1656b8cb4f34d48a9c.scope - libcontainer container e6248722342eecf5c5a2e4b4385b0d3af5b0704e4c3cac1656b8cb4f34d48a9c. 
Jan 20 01:36:18.500797 containerd[1504]: time="2026-01-20T01:36:18.500696506Z" level=info msg="StartContainer for \"e6248722342eecf5c5a2e4b4385b0d3af5b0704e4c3cac1656b8cb4f34d48a9c\" returns successfully" Jan 20 01:36:18.604046 containerd[1504]: time="2026-01-20T01:36:18.603973406Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:18.612680 containerd[1504]: time="2026-01-20T01:36:18.612536310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:36:18.612680 containerd[1504]: time="2026-01-20T01:36:18.612609977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:36:18.613698 kubelet[2683]: E0120 01:36:18.613103 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:36:18.613698 kubelet[2683]: E0120 01:36:18.613179 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:36:18.613698 kubelet[2683]: E0120 01:36:18.613300 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6b6cd5cfd4-psk5g_calico-system(ea506b49-1ce0-4278-a723-d51ad8fec903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:18.615115 kubelet[2683]: E0120 01:36:18.613363 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b6cd5cfd4-psk5g" podUID="ea506b49-1ce0-4278-a723-d51ad8fec903" Jan 20 01:36:18.775964 kernel: bpftool[4598]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 20 01:36:18.882068 kubelet[2683]: E0120 01:36:18.881849 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b6cd5cfd4-psk5g" podUID="ea506b49-1ce0-4278-a723-d51ad8fec903" Jan 20 01:36:18.883417 kubelet[2683]: E0120 01:36:18.883002 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:36:18.943705 kubelet[2683]: I0120 01:36:18.943555 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wkmq8" podStartSLOduration=52.943257339 podStartE2EDuration="52.943257339s" podCreationTimestamp="2026-01-20 01:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:36:18.941222965 +0000 UTC m=+58.929596118" watchObservedRunningTime="2026-01-20 01:36:18.943257339 +0000 UTC m=+58.931630476" Jan 20 01:36:19.127176 systemd-networkd[1413]: cali186e51d402b: Gained IPv6LL Jan 20 01:36:19.205281 systemd-networkd[1413]: vxlan.calico: Link UP Jan 20 01:36:19.205293 systemd-networkd[1413]: vxlan.calico: Gained carrier Jan 20 01:36:20.245410 containerd[1504]: time="2026-01-20T01:36:20.245307504Z" level=info msg="StopPodSandbox for \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\"" Jan 20 01:36:20.440058 containerd[1504]: 2026-01-20 01:36:20.319 [WARNING][4692] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a", Pod:"csi-node-driver-wdqf6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.84.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5e809eb2ead", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:20.440058 containerd[1504]: 2026-01-20 01:36:20.320 [INFO][4692] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Jan 20 01:36:20.440058 containerd[1504]: 2026-01-20 01:36:20.320 [INFO][4692] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" iface="eth0" netns="" Jan 20 01:36:20.440058 containerd[1504]: 2026-01-20 01:36:20.320 [INFO][4692] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Jan 20 01:36:20.440058 containerd[1504]: 2026-01-20 01:36:20.320 [INFO][4692] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Jan 20 01:36:20.440058 containerd[1504]: 2026-01-20 01:36:20.406 [INFO][4699] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" HandleID="k8s-pod-network.4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Workload="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:20.440058 containerd[1504]: 2026-01-20 01:36:20.407 [INFO][4699] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:20.440058 containerd[1504]: 2026-01-20 01:36:20.407 [INFO][4699] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:20.440058 containerd[1504]: 2026-01-20 01:36:20.430 [WARNING][4699] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" HandleID="k8s-pod-network.4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Workload="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:20.440058 containerd[1504]: 2026-01-20 01:36:20.431 [INFO][4699] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" HandleID="k8s-pod-network.4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Workload="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:20.440058 containerd[1504]: 2026-01-20 01:36:20.434 [INFO][4699] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:20.440058 containerd[1504]: 2026-01-20 01:36:20.437 [INFO][4692] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Jan 20 01:36:20.441833 containerd[1504]: time="2026-01-20T01:36:20.440226177Z" level=info msg="TearDown network for sandbox \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\" successfully" Jan 20 01:36:20.441833 containerd[1504]: time="2026-01-20T01:36:20.440263173Z" level=info msg="StopPodSandbox for \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\" returns successfully" Jan 20 01:36:20.441833 containerd[1504]: time="2026-01-20T01:36:20.441706062Z" level=info msg="RemovePodSandbox for \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\"" Jan 20 01:36:20.441833 containerd[1504]: time="2026-01-20T01:36:20.441769938Z" level=info msg="Forcibly stopping sandbox \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\"" Jan 20 01:36:20.603151 containerd[1504]: 2026-01-20 01:36:20.525 [WARNING][4717] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"e19b092fdda5423ab7bc58e2324785c6efb13a796584c1c8cbf2d800f104389a", Pod:"csi-node-driver-wdqf6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.84.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5e809eb2ead", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:20.603151 containerd[1504]: 2026-01-20 01:36:20.525 [INFO][4717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Jan 20 01:36:20.603151 containerd[1504]: 2026-01-20 01:36:20.525 [INFO][4717] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" iface="eth0" netns="" Jan 20 01:36:20.603151 containerd[1504]: 2026-01-20 01:36:20.525 [INFO][4717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Jan 20 01:36:20.603151 containerd[1504]: 2026-01-20 01:36:20.526 [INFO][4717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Jan 20 01:36:20.603151 containerd[1504]: 2026-01-20 01:36:20.582 [INFO][4725] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" HandleID="k8s-pod-network.4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Workload="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:20.603151 containerd[1504]: 2026-01-20 01:36:20.582 [INFO][4725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:20.603151 containerd[1504]: 2026-01-20 01:36:20.582 [INFO][4725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:20.603151 containerd[1504]: 2026-01-20 01:36:20.593 [WARNING][4725] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" HandleID="k8s-pod-network.4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Workload="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:20.603151 containerd[1504]: 2026-01-20 01:36:20.593 [INFO][4725] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" HandleID="k8s-pod-network.4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Workload="srv--nmle2.gb1.brightbox.com-k8s-csi--node--driver--wdqf6-eth0" Jan 20 01:36:20.603151 containerd[1504]: 2026-01-20 01:36:20.597 [INFO][4725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:20.603151 containerd[1504]: 2026-01-20 01:36:20.600 [INFO][4717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e" Jan 20 01:36:20.605394 containerd[1504]: time="2026-01-20T01:36:20.604031741Z" level=info msg="TearDown network for sandbox \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\" successfully" Jan 20 01:36:20.616082 containerd[1504]: time="2026-01-20T01:36:20.616020034Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:36:20.616210 containerd[1504]: time="2026-01-20T01:36:20.616141036Z" level=info msg="RemovePodSandbox \"4e5518f863ded32cb241e75288f081682f85594600d6b0311d6c49dff44a6d4e\" returns successfully" Jan 20 01:36:20.617958 containerd[1504]: time="2026-01-20T01:36:20.617434930Z" level=info msg="StopPodSandbox for \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\"" Jan 20 01:36:20.728049 systemd-networkd[1413]: vxlan.calico: Gained IPv6LL Jan 20 01:36:20.733970 containerd[1504]: 2026-01-20 01:36:20.675 [WARNING][4739] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a922b510-6dd4-4211-8d8f-a8df2985776c", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa", Pod:"coredns-66bc5c9577-nlp7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c7211c5b34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:20.733970 containerd[1504]: 2026-01-20 01:36:20.675 [INFO][4739] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Jan 20 01:36:20.733970 containerd[1504]: 2026-01-20 01:36:20.675 [INFO][4739] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" iface="eth0" netns="" Jan 20 01:36:20.733970 containerd[1504]: 2026-01-20 01:36:20.675 [INFO][4739] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Jan 20 01:36:20.733970 containerd[1504]: 2026-01-20 01:36:20.675 [INFO][4739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Jan 20 01:36:20.733970 containerd[1504]: 2026-01-20 01:36:20.711 [INFO][4747] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" HandleID="k8s-pod-network.e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:20.733970 containerd[1504]: 2026-01-20 01:36:20.711 [INFO][4747] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:20.733970 containerd[1504]: 2026-01-20 01:36:20.711 [INFO][4747] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:20.733970 containerd[1504]: 2026-01-20 01:36:20.722 [WARNING][4747] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" HandleID="k8s-pod-network.e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:20.733970 containerd[1504]: 2026-01-20 01:36:20.722 [INFO][4747] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" HandleID="k8s-pod-network.e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:20.733970 containerd[1504]: 2026-01-20 01:36:20.724 [INFO][4747] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:20.733970 containerd[1504]: 2026-01-20 01:36:20.730 [INFO][4739] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Jan 20 01:36:20.733970 containerd[1504]: time="2026-01-20T01:36:20.733297635Z" level=info msg="TearDown network for sandbox \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\" successfully" Jan 20 01:36:20.733970 containerd[1504]: time="2026-01-20T01:36:20.733342287Z" level=info msg="StopPodSandbox for \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\" returns successfully" Jan 20 01:36:20.734893 containerd[1504]: time="2026-01-20T01:36:20.734137978Z" level=info msg="RemovePodSandbox for \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\"" Jan 20 01:36:20.734893 containerd[1504]: time="2026-01-20T01:36:20.734174519Z" level=info msg="Forcibly stopping sandbox \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\"" Jan 20 01:36:20.834983 containerd[1504]: 2026-01-20 01:36:20.788 [WARNING][4761] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a922b510-6dd4-4211-8d8f-a8df2985776c", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"9b3f0dcfd6f8f756572ccdf8c8f41cb6dd7268ff2954d401abf410ed644f54fa", Pod:"coredns-66bc5c9577-nlp7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c7211c5b34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:20.834983 containerd[1504]: 2026-01-20 01:36:20.789 [INFO][4761] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Jan 20 01:36:20.834983 containerd[1504]: 2026-01-20 01:36:20.789 [INFO][4761] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" iface="eth0" netns="" Jan 20 01:36:20.834983 containerd[1504]: 2026-01-20 01:36:20.789 [INFO][4761] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Jan 20 01:36:20.834983 containerd[1504]: 2026-01-20 01:36:20.789 [INFO][4761] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Jan 20 01:36:20.834983 containerd[1504]: 2026-01-20 01:36:20.818 [INFO][4768] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" HandleID="k8s-pod-network.e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:20.834983 containerd[1504]: 2026-01-20 01:36:20.819 [INFO][4768] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:20.834983 containerd[1504]: 2026-01-20 01:36:20.819 [INFO][4768] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:20.834983 containerd[1504]: 2026-01-20 01:36:20.828 [WARNING][4768] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" HandleID="k8s-pod-network.e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:20.834983 containerd[1504]: 2026-01-20 01:36:20.828 [INFO][4768] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" HandleID="k8s-pod-network.e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--nlp7d-eth0" Jan 20 01:36:20.834983 containerd[1504]: 2026-01-20 01:36:20.831 [INFO][4768] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:20.834983 containerd[1504]: 2026-01-20 01:36:20.833 [INFO][4761] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824" Jan 20 01:36:20.836462 containerd[1504]: time="2026-01-20T01:36:20.835044010Z" level=info msg="TearDown network for sandbox \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\" successfully" Jan 20 01:36:20.838813 containerd[1504]: time="2026-01-20T01:36:20.838774447Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:36:20.838904 containerd[1504]: time="2026-01-20T01:36:20.838851048Z" level=info msg="RemovePodSandbox \"e641ece62b54681f4071e5aa93a65612b604a4dcde30999c80a1c2bb1b76e824\" returns successfully" Jan 20 01:36:20.839975 containerd[1504]: time="2026-01-20T01:36:20.839634731Z" level=info msg="StopPodSandbox for \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\"" Jan 20 01:36:20.948552 containerd[1504]: 2026-01-20 01:36:20.891 [WARNING][4782] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fa1696f7-a972-4ca4-8cc6-bdf816751e94", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7", Pod:"coredns-66bc5c9577-wkmq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali186e51d402b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:20.948552 containerd[1504]: 2026-01-20 01:36:20.893 [INFO][4782] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Jan 20 01:36:20.948552 containerd[1504]: 2026-01-20 01:36:20.893 [INFO][4782] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" iface="eth0" netns="" Jan 20 01:36:20.948552 containerd[1504]: 2026-01-20 01:36:20.893 [INFO][4782] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Jan 20 01:36:20.948552 containerd[1504]: 2026-01-20 01:36:20.893 [INFO][4782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Jan 20 01:36:20.948552 containerd[1504]: 2026-01-20 01:36:20.928 [INFO][4789] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" HandleID="k8s-pod-network.a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:20.948552 containerd[1504]: 2026-01-20 01:36:20.929 [INFO][4789] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:20.948552 containerd[1504]: 2026-01-20 01:36:20.929 [INFO][4789] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:20.948552 containerd[1504]: 2026-01-20 01:36:20.941 [WARNING][4789] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" HandleID="k8s-pod-network.a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:20.948552 containerd[1504]: 2026-01-20 01:36:20.941 [INFO][4789] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" HandleID="k8s-pod-network.a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:20.948552 containerd[1504]: 2026-01-20 01:36:20.944 [INFO][4789] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:20.948552 containerd[1504]: 2026-01-20 01:36:20.946 [INFO][4782] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Jan 20 01:36:20.950198 containerd[1504]: time="2026-01-20T01:36:20.949357269Z" level=info msg="TearDown network for sandbox \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\" successfully" Jan 20 01:36:20.950198 containerd[1504]: time="2026-01-20T01:36:20.949389267Z" level=info msg="StopPodSandbox for \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\" returns successfully" Jan 20 01:36:20.951188 containerd[1504]: time="2026-01-20T01:36:20.950689571Z" level=info msg="RemovePodSandbox for \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\"" Jan 20 01:36:20.951188 containerd[1504]: time="2026-01-20T01:36:20.950724735Z" level=info msg="Forcibly stopping sandbox \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\"" Jan 20 01:36:21.074463 containerd[1504]: 2026-01-20 01:36:21.030 [WARNING][4804] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fa1696f7-a972-4ca4-8cc6-bdf816751e94", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"a1b3687ff97e51ac7cb2193f9fed34bf5b21c421afaade79fdb597ba2b4ce2d7", Pod:"coredns-66bc5c9577-wkmq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali186e51d402b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:21.074463 containerd[1504]: 2026-01-20 01:36:21.030 [INFO][4804] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Jan 20 01:36:21.074463 containerd[1504]: 2026-01-20 01:36:21.030 [INFO][4804] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" iface="eth0" netns="" Jan 20 01:36:21.074463 containerd[1504]: 2026-01-20 01:36:21.030 [INFO][4804] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Jan 20 01:36:21.074463 containerd[1504]: 2026-01-20 01:36:21.030 [INFO][4804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Jan 20 01:36:21.074463 containerd[1504]: 2026-01-20 01:36:21.058 [INFO][4812] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" HandleID="k8s-pod-network.a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:21.074463 containerd[1504]: 2026-01-20 01:36:21.059 [INFO][4812] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:21.074463 containerd[1504]: 2026-01-20 01:36:21.059 [INFO][4812] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:21.074463 containerd[1504]: 2026-01-20 01:36:21.068 [WARNING][4812] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" HandleID="k8s-pod-network.a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:21.074463 containerd[1504]: 2026-01-20 01:36:21.068 [INFO][4812] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" HandleID="k8s-pod-network.a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Workload="srv--nmle2.gb1.brightbox.com-k8s-coredns--66bc5c9577--wkmq8-eth0" Jan 20 01:36:21.074463 containerd[1504]: 2026-01-20 01:36:21.070 [INFO][4812] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:21.074463 containerd[1504]: 2026-01-20 01:36:21.072 [INFO][4804] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2" Jan 20 01:36:21.076167 containerd[1504]: time="2026-01-20T01:36:21.074519506Z" level=info msg="TearDown network for sandbox \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\" successfully" Jan 20 01:36:21.078088 containerd[1504]: time="2026-01-20T01:36:21.077984734Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 20 01:36:21.078219 containerd[1504]: time="2026-01-20T01:36:21.078112268Z" level=info msg="RemovePodSandbox \"a52e569aca1d084770db88e7e33931ec76d629a8cbfb53b774cf391c92e291f2\" returns successfully" Jan 20 01:36:21.079395 containerd[1504]: time="2026-01-20T01:36:21.079031204Z" level=info msg="StopPodSandbox for \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\"" Jan 20 01:36:21.188069 containerd[1504]: 2026-01-20 01:36:21.136 [WARNING][4826] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-whisker--6ffbdbf646--685v4-eth0" Jan 20 01:36:21.188069 containerd[1504]: 2026-01-20 01:36:21.136 [INFO][4826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Jan 20 01:36:21.188069 containerd[1504]: 2026-01-20 01:36:21.136 [INFO][4826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" iface="eth0" netns="" Jan 20 01:36:21.188069 containerd[1504]: 2026-01-20 01:36:21.136 [INFO][4826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Jan 20 01:36:21.188069 containerd[1504]: 2026-01-20 01:36:21.136 [INFO][4826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Jan 20 01:36:21.188069 containerd[1504]: 2026-01-20 01:36:21.170 [INFO][4833] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" HandleID="k8s-pod-network.45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Workload="srv--nmle2.gb1.brightbox.com-k8s-whisker--6ffbdbf646--685v4-eth0" Jan 20 01:36:21.188069 containerd[1504]: 2026-01-20 01:36:21.170 [INFO][4833] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:21.188069 containerd[1504]: 2026-01-20 01:36:21.170 [INFO][4833] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:21.188069 containerd[1504]: 2026-01-20 01:36:21.180 [WARNING][4833] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" HandleID="k8s-pod-network.45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Workload="srv--nmle2.gb1.brightbox.com-k8s-whisker--6ffbdbf646--685v4-eth0" Jan 20 01:36:21.188069 containerd[1504]: 2026-01-20 01:36:21.180 [INFO][4833] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" HandleID="k8s-pod-network.45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Workload="srv--nmle2.gb1.brightbox.com-k8s-whisker--6ffbdbf646--685v4-eth0" Jan 20 01:36:21.188069 containerd[1504]: 2026-01-20 01:36:21.184 [INFO][4833] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:21.188069 containerd[1504]: 2026-01-20 01:36:21.186 [INFO][4826] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Jan 20 01:36:21.188069 containerd[1504]: time="2026-01-20T01:36:21.188095280Z" level=info msg="TearDown network for sandbox \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\" successfully" Jan 20 01:36:21.188069 containerd[1504]: time="2026-01-20T01:36:21.188135219Z" level=info msg="StopPodSandbox for \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\" returns successfully" Jan 20 01:36:21.190102 containerd[1504]: time="2026-01-20T01:36:21.188891867Z" level=info msg="RemovePodSandbox for \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\"" Jan 20 01:36:21.190102 containerd[1504]: time="2026-01-20T01:36:21.188930929Z" level=info msg="Forcibly stopping sandbox \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\"" Jan 20 01:36:21.292210 containerd[1504]: 2026-01-20 01:36:21.241 [WARNING][4847] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-whisker--6ffbdbf646--685v4-eth0" Jan 20 01:36:21.292210 containerd[1504]: 2026-01-20 01:36:21.241 [INFO][4847] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Jan 20 01:36:21.292210 containerd[1504]: 2026-01-20 01:36:21.241 [INFO][4847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" iface="eth0" netns="" Jan 20 01:36:21.292210 containerd[1504]: 2026-01-20 01:36:21.241 [INFO][4847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Jan 20 01:36:21.292210 containerd[1504]: 2026-01-20 01:36:21.241 [INFO][4847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Jan 20 01:36:21.292210 containerd[1504]: 2026-01-20 01:36:21.275 [INFO][4854] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" HandleID="k8s-pod-network.45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Workload="srv--nmle2.gb1.brightbox.com-k8s-whisker--6ffbdbf646--685v4-eth0" Jan 20 01:36:21.292210 containerd[1504]: 2026-01-20 01:36:21.275 [INFO][4854] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:21.292210 containerd[1504]: 2026-01-20 01:36:21.275 [INFO][4854] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:21.292210 containerd[1504]: 2026-01-20 01:36:21.285 [WARNING][4854] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" HandleID="k8s-pod-network.45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Workload="srv--nmle2.gb1.brightbox.com-k8s-whisker--6ffbdbf646--685v4-eth0" Jan 20 01:36:21.292210 containerd[1504]: 2026-01-20 01:36:21.285 [INFO][4854] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" HandleID="k8s-pod-network.45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Workload="srv--nmle2.gb1.brightbox.com-k8s-whisker--6ffbdbf646--685v4-eth0" Jan 20 01:36:21.292210 containerd[1504]: 2026-01-20 01:36:21.288 [INFO][4854] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:21.292210 containerd[1504]: 2026-01-20 01:36:21.290 [INFO][4847] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009" Jan 20 01:36:21.292210 containerd[1504]: time="2026-01-20T01:36:21.292149610Z" level=info msg="TearDown network for sandbox \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\" successfully" Jan 20 01:36:21.296383 containerd[1504]: time="2026-01-20T01:36:21.296347301Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:36:21.296473 containerd[1504]: time="2026-01-20T01:36:21.296451010Z" level=info msg="RemovePodSandbox \"45bd9d3714366e26dfb93fc28674b677cecf57df1386be72685f42e717742009\" returns successfully" Jan 20 01:36:25.909555 systemd[1]: Started sshd@15-10.230.15.2:22-134.209.94.87:55786.service - OpenSSH per-connection server daemon (134.209.94.87:55786). Jan 20 01:36:26.067906 sshd[4869]: Connection closed by authenticating user root 134.209.94.87 port 55786 [preauth] Jan 20 01:36:26.071668 systemd[1]: sshd@15-10.230.15.2:22-134.209.94.87:55786.service: Deactivated successfully. Jan 20 01:36:26.278951 containerd[1504]: time="2026-01-20T01:36:26.277845890Z" level=info msg="StopPodSandbox for \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\"" Jan 20 01:36:26.278951 containerd[1504]: time="2026-01-20T01:36:26.278031896Z" level=info msg="StopPodSandbox for \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\"" Jan 20 01:36:26.471054 containerd[1504]: 2026-01-20 01:36:26.382 [INFO][4897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:36:26.471054 containerd[1504]: 2026-01-20 01:36:26.383 [INFO][4897] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" iface="eth0" netns="/var/run/netns/cni-69fda2e2-7c6a-a8c4-65d2-08fb036ed692" Jan 20 01:36:26.471054 containerd[1504]: 2026-01-20 01:36:26.384 [INFO][4897] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" iface="eth0" netns="/var/run/netns/cni-69fda2e2-7c6a-a8c4-65d2-08fb036ed692" Jan 20 01:36:26.471054 containerd[1504]: 2026-01-20 01:36:26.386 [INFO][4897] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" iface="eth0" netns="/var/run/netns/cni-69fda2e2-7c6a-a8c4-65d2-08fb036ed692" Jan 20 01:36:26.471054 containerd[1504]: 2026-01-20 01:36:26.386 [INFO][4897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:36:26.471054 containerd[1504]: 2026-01-20 01:36:26.386 [INFO][4897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:36:26.471054 containerd[1504]: 2026-01-20 01:36:26.436 [INFO][4910] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" HandleID="k8s-pod-network.01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:36:26.471054 containerd[1504]: 2026-01-20 01:36:26.437 [INFO][4910] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:26.471054 containerd[1504]: 2026-01-20 01:36:26.437 [INFO][4910] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:26.471054 containerd[1504]: 2026-01-20 01:36:26.459 [WARNING][4910] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" HandleID="k8s-pod-network.01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:36:26.471054 containerd[1504]: 2026-01-20 01:36:26.460 [INFO][4910] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" HandleID="k8s-pod-network.01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:36:26.471054 containerd[1504]: 2026-01-20 01:36:26.463 [INFO][4910] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:26.471054 containerd[1504]: 2026-01-20 01:36:26.466 [INFO][4897] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:36:26.472833 containerd[1504]: time="2026-01-20T01:36:26.471267671Z" level=info msg="TearDown network for sandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\" successfully" Jan 20 01:36:26.472833 containerd[1504]: time="2026-01-20T01:36:26.471303996Z" level=info msg="StopPodSandbox for \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\" returns successfully" Jan 20 01:36:26.477544 systemd[1]: run-netns-cni\x2d69fda2e2\x2d7c6a\x2da8c4\x2d65d2\x2d08fb036ed692.mount: Deactivated successfully. 
Jan 20 01:36:26.477848 containerd[1504]: time="2026-01-20T01:36:26.477625870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d65cdbcf4-xqqft,Uid:708249e2-7049-4ff6-8bf2-b94a10ee1bca,Namespace:calico-system,Attempt:1,}" Jan 20 01:36:26.492214 containerd[1504]: 2026-01-20 01:36:26.382 [INFO][4898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:36:26.492214 containerd[1504]: 2026-01-20 01:36:26.383 [INFO][4898] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" iface="eth0" netns="/var/run/netns/cni-d71b97ec-cc5b-bbf2-1531-854cb48db0c1" Jan 20 01:36:26.492214 containerd[1504]: 2026-01-20 01:36:26.386 [INFO][4898] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" iface="eth0" netns="/var/run/netns/cni-d71b97ec-cc5b-bbf2-1531-854cb48db0c1" Jan 20 01:36:26.492214 containerd[1504]: 2026-01-20 01:36:26.394 [INFO][4898] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" iface="eth0" netns="/var/run/netns/cni-d71b97ec-cc5b-bbf2-1531-854cb48db0c1" Jan 20 01:36:26.492214 containerd[1504]: 2026-01-20 01:36:26.394 [INFO][4898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:36:26.492214 containerd[1504]: 2026-01-20 01:36:26.394 [INFO][4898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:36:26.492214 containerd[1504]: 2026-01-20 01:36:26.448 [INFO][4915] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" HandleID="k8s-pod-network.83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:36:26.492214 containerd[1504]: 2026-01-20 01:36:26.448 [INFO][4915] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:26.492214 containerd[1504]: 2026-01-20 01:36:26.463 [INFO][4915] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:26.492214 containerd[1504]: 2026-01-20 01:36:26.483 [WARNING][4915] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" HandleID="k8s-pod-network.83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:36:26.492214 containerd[1504]: 2026-01-20 01:36:26.484 [INFO][4915] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" HandleID="k8s-pod-network.83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:36:26.492214 containerd[1504]: 2026-01-20 01:36:26.486 [INFO][4915] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:26.492214 containerd[1504]: 2026-01-20 01:36:26.488 [INFO][4898] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:36:26.497034 containerd[1504]: time="2026-01-20T01:36:26.493027775Z" level=info msg="TearDown network for sandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\" successfully" Jan 20 01:36:26.497034 containerd[1504]: time="2026-01-20T01:36:26.493055972Z" level=info msg="StopPodSandbox for \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\" returns successfully" Jan 20 01:36:26.497201 systemd[1]: run-netns-cni\x2dd71b97ec\x2dcc5b\x2dbbf2\x2d1531\x2d854cb48db0c1.mount: Deactivated successfully. Jan 20 01:36:26.502030 containerd[1504]: time="2026-01-20T01:36:26.499715804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6469cbc-qrwh4,Uid:72e0069f-0dfe-458b-8762-abad903cdba3,Namespace:calico-apiserver,Attempt:1,}" Jan 20 01:36:26.765833 systemd-networkd[1413]: caliabe6beaf2a2: Link UP Jan 20 01:36:26.768206 systemd-networkd[1413]: caliabe6beaf2a2: Gained carrier Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.583 [INFO][4923] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0 calico-kube-controllers-7d65cdbcf4- calico-system 708249e2-7049-4ff6-8bf2-b94a10ee1bca 1056 0 2026-01-20 01:35:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d65cdbcf4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-nmle2.gb1.brightbox.com calico-kube-controllers-7d65cdbcf4-xqqft eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliabe6beaf2a2 [] [] }} ContainerID="d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" Namespace="calico-system" Pod="calico-kube-controllers-7d65cdbcf4-xqqft" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-" Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.584 [INFO][4923] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" Namespace="calico-system" Pod="calico-kube-controllers-7d65cdbcf4-xqqft" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.686 [INFO][4947] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" HandleID="k8s-pod-network.d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.686 [INFO][4947] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" HandleID="k8s-pod-network.d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039da20), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-nmle2.gb1.brightbox.com", "pod":"calico-kube-controllers-7d65cdbcf4-xqqft", 
"timestamp":"2026-01-20 01:36:26.686284748 +0000 UTC"}, Hostname:"srv-nmle2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.686 [INFO][4947] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.687 [INFO][4947] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.687 [INFO][4947] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nmle2.gb1.brightbox.com' Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.702 [INFO][4947] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.709 [INFO][4947] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.718 [INFO][4947] ipam/ipam.go 511: Trying affinity for 192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.721 [INFO][4947] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.725 [INFO][4947] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.725 [INFO][4947] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.192/26 handle="k8s-pod-network.d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.727 [INFO][4947] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.736 [INFO][4947] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.192/26 handle="k8s-pod-network.d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.747 [INFO][4947] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.84.197/26] block=192.168.84.192/26 handle="k8s-pod-network.d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.747 [INFO][4947] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.197/26] handle="k8s-pod-network.d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.748 [INFO][4947] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:36:26.808589 containerd[1504]: 2026-01-20 01:36:26.748 [INFO][4947] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.197/26] IPv6=[] ContainerID="d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" HandleID="k8s-pod-network.d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:36:26.811833 containerd[1504]: 2026-01-20 01:36:26.757 [INFO][4923] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" Namespace="calico-system" Pod="calico-kube-controllers-7d65cdbcf4-xqqft" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0", GenerateName:"calico-kube-controllers-7d65cdbcf4-", Namespace:"calico-system", SelfLink:"", UID:"708249e2-7049-4ff6-8bf2-b94a10ee1bca", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d65cdbcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-7d65cdbcf4-xqqft", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliabe6beaf2a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:26.811833 containerd[1504]: 2026-01-20 01:36:26.757 [INFO][4923] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.197/32] ContainerID="d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" Namespace="calico-system" Pod="calico-kube-controllers-7d65cdbcf4-xqqft" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:36:26.811833 containerd[1504]: 2026-01-20 01:36:26.757 [INFO][4923] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabe6beaf2a2 ContainerID="d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" Namespace="calico-system" Pod="calico-kube-controllers-7d65cdbcf4-xqqft" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:36:26.811833 containerd[1504]: 2026-01-20 01:36:26.770 [INFO][4923] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" Namespace="calico-system" Pod="calico-kube-controllers-7d65cdbcf4-xqqft" 
WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:36:26.811833 containerd[1504]: 2026-01-20 01:36:26.773 [INFO][4923] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" Namespace="calico-system" Pod="calico-kube-controllers-7d65cdbcf4-xqqft" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0", GenerateName:"calico-kube-controllers-7d65cdbcf4-", Namespace:"calico-system", SelfLink:"", UID:"708249e2-7049-4ff6-8bf2-b94a10ee1bca", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d65cdbcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c", Pod:"calico-kube-controllers-7d65cdbcf4-xqqft", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliabe6beaf2a2", MAC:"76:1a:78:5e:17:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:26.811833 containerd[1504]: 2026-01-20 01:36:26.805 [INFO][4923] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c" Namespace="calico-system" Pod="calico-kube-controllers-7d65cdbcf4-xqqft" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:36:26.881453 containerd[1504]: time="2026-01-20T01:36:26.881162952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:36:26.881770 containerd[1504]: time="2026-01-20T01:36:26.881398362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:36:26.882366 containerd[1504]: time="2026-01-20T01:36:26.881745916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:26.882483 containerd[1504]: time="2026-01-20T01:36:26.882097250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:26.946577 systemd[1]: Started cri-containerd-d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c.scope - libcontainer container d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c. Jan 20 01:36:26.967168 systemd-networkd[1413]: cali7e78c8d76f7: Link UP Jan 20 01:36:26.968607 systemd-networkd[1413]: cali7e78c8d76f7: Gained carrier Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.658 [INFO][4933] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0 calico-apiserver-c6469cbc- calico-apiserver 72e0069f-0dfe-458b-8762-abad903cdba3 1055 0 2026-01-20 01:35:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c6469cbc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-nmle2.gb1.brightbox.com calico-apiserver-c6469cbc-qrwh4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7e78c8d76f7 [] [] }} ContainerID="4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-qrwh4" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-" Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.658 [INFO][4933] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-qrwh4" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.717 [INFO][4952] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" HandleID="k8s-pod-network.4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.718 [INFO][4952] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" HandleID="k8s-pod-network.4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-nmle2.gb1.brightbox.com", "pod":"calico-apiserver-c6469cbc-qrwh4", "timestamp":"2026-01-20 01:36:26.717845577 +0000 UTC"}, Hostname:"srv-nmle2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.718 [INFO][4952] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.749 [INFO][4952] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.749 [INFO][4952] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nmle2.gb1.brightbox.com' Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.814 [INFO][4952] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.846 [INFO][4952] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.878 [INFO][4952] ipam/ipam.go 511: Trying affinity for 192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.888 [INFO][4952] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.893 [INFO][4952] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.894 [INFO][4952] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.192/26 handle="k8s-pod-network.4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.908 [INFO][4952] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.921 [INFO][4952] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.192/26 handle="k8s-pod-network.4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.951 [INFO][4952] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.84.198/26] block=192.168.84.192/26 handle="k8s-pod-network.4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.951 [INFO][4952] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.198/26] handle="k8s-pod-network.4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.951 [INFO][4952] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
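Note the timing in the two interleaved allocations: handler [4952] logs "About to acquire host-wide IPAM lock" at 26.718 but only acquires it at 26.749, immediately after [4947] releases it at 26.748. That single lock is what serializes concurrent CNI ADDs so the two pods end up with distinct addresses (.197 and .198). A toy illustration in Go; which pod wins the lock first is scheduler-dependent, exactly as in the log:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu   sync.Mutex
		next = 197 // first free host octet in the 192.168.84.192/26 block
		wg   sync.WaitGroup
	)
	alloc := func(pod string) {
		defer wg.Done()
		mu.Lock() // "Acquired host-wide IPAM lock."
		ip := fmt.Sprintf("192.168.84.%d", next)
		next++
		mu.Unlock() // "Released host-wide IPAM lock."
		fmt.Println(pod, "->", ip)
	}
	wg.Add(2)
	go alloc("calico-kube-controllers-7d65cdbcf4-xqqft")
	go alloc("calico-apiserver-c6469cbc-qrwh4")
	wg.Wait() // both pods get distinct IPs; order depends on who wins the lock
}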
Jan 20 01:36:27.019379 containerd[1504]: 2026-01-20 01:36:26.951 [INFO][4952] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.198/26] IPv6=[] ContainerID="4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" HandleID="k8s-pod-network.4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:36:27.020466 containerd[1504]: 2026-01-20 01:36:26.956 [INFO][4933] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-qrwh4" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0", GenerateName:"calico-apiserver-c6469cbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"72e0069f-0dfe-458b-8762-abad903cdba3", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6469cbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-c6469cbc-qrwh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e78c8d76f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:27.020466 containerd[1504]: 2026-01-20 01:36:26.959 [INFO][4933] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.198/32] ContainerID="4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-qrwh4" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:36:27.020466 containerd[1504]: 2026-01-20 01:36:26.959 [INFO][4933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e78c8d76f7 ContainerID="4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-qrwh4" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:36:27.020466 containerd[1504]: 2026-01-20 01:36:26.971 [INFO][4933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-qrwh4" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:36:27.020466 containerd[1504]: 2026-01-20 01:36:26.972 [INFO][4933] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-qrwh4" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0", GenerateName:"calico-apiserver-c6469cbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"72e0069f-0dfe-458b-8762-abad903cdba3", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6469cbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a", Pod:"calico-apiserver-c6469cbc-qrwh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e78c8d76f7", MAC:"6a:6f:85:9c:5b:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:27.020466 containerd[1504]: 2026-01-20 01:36:27.013 [INFO][4933] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-qrwh4" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:36:27.072866 containerd[1504]: time="2026-01-20T01:36:27.072666784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:36:27.073253 containerd[1504]: time="2026-01-20T01:36:27.072870757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:36:27.073253 containerd[1504]: time="2026-01-20T01:36:27.072917327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:27.073253 containerd[1504]: time="2026-01-20T01:36:27.073124504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:27.124738 systemd[1]: Started cri-containerd-4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a.scope - libcontainer container 4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a. 
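Two systemd naming conventions are visible around here: each containerd shim gets a transient scope unit named cri-containerd-&lt;sandbox-id&gt;.scope (hex IDs need no escaping), while the netns bind mounts appear as units like run-netns-cni\x2d69fda2e2\x2d....mount, because systemd escapes "-" and other special bytes as \xNN when turning a path into a unit name. A simplified Go rendition of that escaping (systemd-escape has more rules, e.g. for a leading dot, which this sketch ignores):

package main

import (
	"fmt"
	"strings"
)

// unitEscape is a simplified systemd-escape: "/" maps to "-", and
// any byte outside [A-Za-z0-9:_.] is written as \xNN. This is why
// /run/netns/cni-69fda2e2-... shows up in the log as the mount unit
// run-netns-cni\x2d69fda2e2\x2d....mount.
func unitEscape(path string) string {
	path = strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Prints run-netns-cni\x2d69fda2e2\x2d7c6a\x2da8c4\x2d65d2\x2d08fb036ed692.mount,
	// matching the unit deactivated in the log above.
	fmt.Println(unitEscape("/run/netns/cni-69fda2e2-7c6a-a8c4-65d2-08fb036ed692") + ".mount")
}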
Jan 20 01:36:27.236983 containerd[1504]: time="2026-01-20T01:36:27.236834101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d65cdbcf4-xqqft,Uid:708249e2-7049-4ff6-8bf2-b94a10ee1bca,Namespace:calico-system,Attempt:1,} returns sandbox id \"d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c\"" Jan 20 01:36:27.244520 containerd[1504]: time="2026-01-20T01:36:27.242987207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:36:27.255840 containerd[1504]: time="2026-01-20T01:36:27.255773249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6469cbc-qrwh4,Uid:72e0069f-0dfe-458b-8762-abad903cdba3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a\"" Jan 20 01:36:27.276820 containerd[1504]: time="2026-01-20T01:36:27.275891351Z" level=info msg="StopPodSandbox for \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\"" Jan 20 01:36:27.404882 containerd[1504]: 2026-01-20 01:36:27.346 [INFO][5069] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:36:27.404882 containerd[1504]: 2026-01-20 01:36:27.346 [INFO][5069] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" iface="eth0" netns="/var/run/netns/cni-39bb04a4-c92f-463a-3b51-3d85be73ed21" Jan 20 01:36:27.404882 containerd[1504]: 2026-01-20 01:36:27.347 [INFO][5069] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" iface="eth0" netns="/var/run/netns/cni-39bb04a4-c92f-463a-3b51-3d85be73ed21" Jan 20 01:36:27.404882 containerd[1504]: 2026-01-20 01:36:27.348 [INFO][5069] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" iface="eth0" netns="/var/run/netns/cni-39bb04a4-c92f-463a-3b51-3d85be73ed21" Jan 20 01:36:27.404882 containerd[1504]: 2026-01-20 01:36:27.349 [INFO][5069] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:36:27.404882 containerd[1504]: 2026-01-20 01:36:27.349 [INFO][5069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:36:27.404882 containerd[1504]: 2026-01-20 01:36:27.380 [INFO][5077] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" HandleID="k8s-pod-network.921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Workload="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:36:27.404882 containerd[1504]: 2026-01-20 01:36:27.381 [INFO][5077] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:27.404882 containerd[1504]: 2026-01-20 01:36:27.381 [INFO][5077] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:27.404882 containerd[1504]: 2026-01-20 01:36:27.396 [WARNING][5077] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" HandleID="k8s-pod-network.921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Workload="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:36:27.404882 containerd[1504]: 2026-01-20 01:36:27.396 [INFO][5077] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" HandleID="k8s-pod-network.921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Workload="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:36:27.404882 containerd[1504]: 2026-01-20 01:36:27.399 [INFO][5077] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:27.404882 containerd[1504]: 2026-01-20 01:36:27.402 [INFO][5069] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:36:27.407606 containerd[1504]: time="2026-01-20T01:36:27.405831485Z" level=info msg="TearDown network for sandbox \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\" successfully" Jan 20 01:36:27.407606 containerd[1504]: time="2026-01-20T01:36:27.405900868Z" level=info msg="StopPodSandbox for \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\" returns successfully" Jan 20 01:36:27.410996 containerd[1504]: time="2026-01-20T01:36:27.410912461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-x6hg8,Uid:2652f767-bf33-49f7-b353-182252d33510,Namespace:calico-system,Attempt:1,}" Jan 20 01:36:27.488808 systemd[1]: run-netns-cni\x2d39bb04a4\x2dc92f\x2d463a\x2d3b51\x2d3d85be73ed21.mount: Deactivated successfully. Jan 20 01:36:27.561755 containerd[1504]: time="2026-01-20T01:36:27.561465525Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:27.564304 containerd[1504]: time="2026-01-20T01:36:27.564166895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:36:27.564386 containerd[1504]: time="2026-01-20T01:36:27.564311587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:36:27.564862 kubelet[2683]: E0120 01:36:27.564683 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:36:27.564862 kubelet[2683]: E0120 01:36:27.564820 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:36:27.567571 kubelet[2683]: E0120 01:36:27.566516 2683 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d65cdbcf4-xqqft_calico-system(708249e2-7049-4ff6-8bf2-b94a10ee1bca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:27.567571 kubelet[2683]: E0120 01:36:27.566803 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" podUID="708249e2-7049-4ff6-8bf2-b94a10ee1bca" Jan 20 01:36:27.568114 containerd[1504]: time="2026-01-20T01:36:27.566086335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:36:27.605613 systemd-networkd[1413]: cali31f593cd532: Link UP Jan 20 01:36:27.606811 systemd-networkd[1413]: cali31f593cd532: Gained carrier Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.472 [INFO][5083] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0 goldmane-7c778bb748- calico-system 2652f767-bf33-49f7-b353-182252d33510 1067 0 2026-01-20 01:35:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-nmle2.gb1.brightbox.com goldmane-7c778bb748-x6hg8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali31f593cd532 [] [] }} ContainerID="4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" Namespace="calico-system" Pod="goldmane-7c778bb748-x6hg8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-" Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.472 [INFO][5083] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" Namespace="calico-system" Pod="goldmane-7c778bb748-x6hg8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.537 [INFO][5095] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" HandleID="k8s-pod-network.4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" Workload="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.537 [INFO][5095] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" HandleID="k8s-pod-network.4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" Workload="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb6e0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"srv-nmle2.gb1.brightbox.com", "pod":"goldmane-7c778bb748-x6hg8", "timestamp":"2026-01-20 01:36:27.536995862 +0000 UTC"}, Hostname:"srv-nmle2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.537 [INFO][5095] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.537 [INFO][5095] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.537 [INFO][5095] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nmle2.gb1.brightbox.com' Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.548 [INFO][5095] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.556 [INFO][5095] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.567 [INFO][5095] ipam/ipam.go 511: Trying affinity for 192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.571 [INFO][5095] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.576 [INFO][5095] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.576 [INFO][5095] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.192/26 handle="k8s-pod-network.4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.578 [INFO][5095] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1 Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.583 [INFO][5095] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.192/26 handle="k8s-pod-network.4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.591 [INFO][5095] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.84.199/26] block=192.168.84.192/26 handle="k8s-pod-network.4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.592 [INFO][5095] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.199/26] handle="k8s-pod-network.4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.592 [INFO][5095] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:36:27.629433 containerd[1504]: 2026-01-20 01:36:27.592 [INFO][5095] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.199/26] IPv6=[] ContainerID="4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" HandleID="k8s-pod-network.4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" Workload="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:36:27.631907 containerd[1504]: 2026-01-20 01:36:27.595 [INFO][5083] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" Namespace="calico-system" Pod="goldmane-7c778bb748-x6hg8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"2652f767-bf33-49f7-b353-182252d33510", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-7c778bb748-x6hg8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.84.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali31f593cd532", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:27.631907 containerd[1504]: 2026-01-20 01:36:27.596 [INFO][5083] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.199/32] ContainerID="4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" Namespace="calico-system" Pod="goldmane-7c778bb748-x6hg8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:36:27.631907 containerd[1504]: 2026-01-20 01:36:27.596 [INFO][5083] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31f593cd532 ContainerID="4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" Namespace="calico-system" Pod="goldmane-7c778bb748-x6hg8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:36:27.631907 containerd[1504]: 2026-01-20 01:36:27.608 [INFO][5083] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" Namespace="calico-system" Pod="goldmane-7c778bb748-x6hg8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:36:27.631907 containerd[1504]: 2026-01-20 01:36:27.609 [INFO][5083] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" 
Namespace="calico-system" Pod="goldmane-7c778bb748-x6hg8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"2652f767-bf33-49f7-b353-182252d33510", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1", Pod:"goldmane-7c778bb748-x6hg8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.84.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali31f593cd532", MAC:"56:73:8b:2a:32:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:27.631907 containerd[1504]: 2026-01-20 01:36:27.624 [INFO][5083] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1" Namespace="calico-system" Pod="goldmane-7c778bb748-x6hg8" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:36:27.672113 containerd[1504]: time="2026-01-20T01:36:27.670884186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:36:27.672113 containerd[1504]: time="2026-01-20T01:36:27.670981195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:36:27.672113 containerd[1504]: time="2026-01-20T01:36:27.671040018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:27.674409 containerd[1504]: time="2026-01-20T01:36:27.672001304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:27.722416 systemd[1]: Started cri-containerd-4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1.scope - libcontainer container 4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1. 
Jan 20 01:36:27.797221 containerd[1504]: time="2026-01-20T01:36:27.797079637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-x6hg8,Uid:2652f767-bf33-49f7-b353-182252d33510,Namespace:calico-system,Attempt:1,} returns sandbox id \"4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1\"" Jan 20 01:36:27.886096 containerd[1504]: time="2026-01-20T01:36:27.885890022Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:27.888263 containerd[1504]: time="2026-01-20T01:36:27.887990676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:36:27.888263 containerd[1504]: time="2026-01-20T01:36:27.888061861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:36:27.888563 kubelet[2683]: E0120 01:36:27.888341 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:36:27.888563 kubelet[2683]: E0120 01:36:27.888405 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:36:27.888783 kubelet[2683]: E0120 01:36:27.888671 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c6469cbc-qrwh4_calico-apiserver(72e0069f-0dfe-458b-8762-abad903cdba3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:27.888783 kubelet[2683]: E0120 01:36:27.888759 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" podUID="72e0069f-0dfe-458b-8762-abad903cdba3" Jan 20 01:36:27.889538 containerd[1504]: time="2026-01-20T01:36:27.889463445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:36:27.946226 kubelet[2683]: E0120 01:36:27.946059 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" podUID="708249e2-7049-4ff6-8bf2-b94a10ee1bca" Jan 20 01:36:27.951958 kubelet[2683]: E0120 01:36:27.948974 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" podUID="72e0069f-0dfe-458b-8762-abad903cdba3" Jan 20 01:36:28.150295 systemd-networkd[1413]: cali7e78c8d76f7: Gained IPv6LL Jan 20 01:36:28.199538 containerd[1504]: time="2026-01-20T01:36:28.199470330Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:28.200930 containerd[1504]: time="2026-01-20T01:36:28.200882163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:36:28.201080 containerd[1504]: time="2026-01-20T01:36:28.201032852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:36:28.208992 kubelet[2683]: E0120 01:36:28.201430 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:36:28.209135 kubelet[2683]: E0120 01:36:28.208995 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:36:28.209208 kubelet[2683]: E0120 01:36:28.209127 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-x6hg8_calico-system(2652f767-bf33-49f7-b353-182252d33510): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:28.209208 kubelet[2683]: E0120 01:36:28.209178 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x6hg8" podUID="2652f767-bf33-49f7-b353-182252d33510" 
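All three pulls fail the same way: the v3.30.4 tags do not resolve under ghcr.io/flatcar/calico/*, containerd reports NotFound after exhausting registry hosts ("trying next host - response was http.StatusNotFound"), and the kubelet escalates each ErrImagePull into ImagePullBackOff. Running crictl pull ghcr.io/flatcar/calico/kube-controllers:v3.30.4 on the node would reproduce the same resolution error outside the kubelet. The kubelet's image-pull backoff is exponential; the common defaults are a 10s initial delay doubling to a 5m cap (an assumption about this node's configuration, not something stated in the log). A sketch of that schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s initial back-off, doubling per
	// failed attempt, capped at 5 minutes.
	delay := 10 * time.Second
	maxDelay := 5 * time.Minute
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("ImagePullBackOff: retry %d scheduled after %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, then holds at 5m0s.
}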
Jan 20 01:36:28.277471 containerd[1504]: time="2026-01-20T01:36:28.276630838Z" level=info msg="StopPodSandbox for \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\"" Jan 20 01:36:28.403474 containerd[1504]: 2026-01-20 01:36:28.348 [INFO][5170] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:36:28.403474 containerd[1504]: 2026-01-20 01:36:28.348 [INFO][5170] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" iface="eth0" netns="/var/run/netns/cni-f051c585-504a-951c-c8a5-0e18b9154cd2" Jan 20 01:36:28.403474 containerd[1504]: 2026-01-20 01:36:28.350 [INFO][5170] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" iface="eth0" netns="/var/run/netns/cni-f051c585-504a-951c-c8a5-0e18b9154cd2" Jan 20 01:36:28.403474 containerd[1504]: 2026-01-20 01:36:28.351 [INFO][5170] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" iface="eth0" netns="/var/run/netns/cni-f051c585-504a-951c-c8a5-0e18b9154cd2" Jan 20 01:36:28.403474 containerd[1504]: 2026-01-20 01:36:28.352 [INFO][5170] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:36:28.403474 containerd[1504]: 2026-01-20 01:36:28.352 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:36:28.403474 containerd[1504]: 2026-01-20 01:36:28.386 [INFO][5177] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" HandleID="k8s-pod-network.9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:36:28.403474 containerd[1504]: 2026-01-20 01:36:28.386 [INFO][5177] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:28.403474 containerd[1504]: 2026-01-20 01:36:28.386 [INFO][5177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:28.403474 containerd[1504]: 2026-01-20 01:36:28.397 [WARNING][5177] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" HandleID="k8s-pod-network.9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:36:28.403474 containerd[1504]: 2026-01-20 01:36:28.397 [INFO][5177] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" HandleID="k8s-pod-network.9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:36:28.403474 containerd[1504]: 2026-01-20 01:36:28.399 [INFO][5177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:36:28.403474 containerd[1504]: 2026-01-20 01:36:28.401 [INFO][5170] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:36:28.406850 containerd[1504]: time="2026-01-20T01:36:28.406206329Z" level=info msg="TearDown network for sandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\" successfully" Jan 20 01:36:28.406850 containerd[1504]: time="2026-01-20T01:36:28.406270057Z" level=info msg="StopPodSandbox for \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\" returns successfully" Jan 20 01:36:28.410055 systemd[1]: run-netns-cni\x2df051c585\x2d504a\x2d951c\x2dc8a5\x2d0e18b9154cd2.mount: Deactivated successfully. Jan 20 01:36:28.411980 containerd[1504]: time="2026-01-20T01:36:28.411724038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6469cbc-m6w49,Uid:9e8ed20d-7ae4-416a-a5ca-28bbd455038b,Namespace:calico-apiserver,Attempt:1,}" Jan 20 01:36:28.595254 systemd-networkd[1413]: calibbf5d12564c: Link UP Jan 20 01:36:28.595585 systemd-networkd[1413]: calibbf5d12564c: Gained carrier Jan 20 01:36:28.599542 systemd-networkd[1413]: caliabe6beaf2a2: Gained IPv6LL Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.489 [INFO][5183] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0 calico-apiserver-c6469cbc- calico-apiserver 9e8ed20d-7ae4-416a-a5ca-28bbd455038b 1089 0 2026-01-20 01:35:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c6469cbc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-nmle2.gb1.brightbox.com calico-apiserver-c6469cbc-m6w49 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibbf5d12564c [] [] }} ContainerID="940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-m6w49" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-" Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.490 [INFO][5183] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-m6w49" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.529 [INFO][5196] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" HandleID="k8s-pod-network.940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.529 [INFO][5196] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" HandleID="k8s-pod-network.940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f620), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-nmle2.gb1.brightbox.com", "pod":"calico-apiserver-c6469cbc-m6w49", "timestamp":"2026-01-20 
01:36:28.529519165 +0000 UTC"}, Hostname:"srv-nmle2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.530 [INFO][5196] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.530 [INFO][5196] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.530 [INFO][5196] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nmle2.gb1.brightbox.com' Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.544 [INFO][5196] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.551 [INFO][5196] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.560 [INFO][5196] ipam/ipam.go 511: Trying affinity for 192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.562 [INFO][5196] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.566 [INFO][5196] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.192/26 host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.566 [INFO][5196] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.192/26 handle="k8s-pod-network.940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.568 [INFO][5196] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797 Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.575 [INFO][5196] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.192/26 handle="k8s-pod-network.940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.583 [INFO][5196] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.84.200/26] block=192.168.84.192/26 handle="k8s-pod-network.940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.584 [INFO][5196] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.200/26] handle="k8s-pod-network.940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" host="srv-nmle2.gb1.brightbox.com" Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.584 [INFO][5196] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:36:28.627573 containerd[1504]: 2026-01-20 01:36:28.584 [INFO][5196] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.200/26] IPv6=[] ContainerID="940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" HandleID="k8s-pod-network.940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:36:28.630358 containerd[1504]: 2026-01-20 01:36:28.588 [INFO][5183] cni-plugin/k8s.go 418: Populated endpoint ContainerID="940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-m6w49" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0", GenerateName:"calico-apiserver-c6469cbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e8ed20d-7ae4-416a-a5ca-28bbd455038b", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6469cbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-c6469cbc-m6w49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbf5d12564c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:28.630358 containerd[1504]: 2026-01-20 01:36:28.588 [INFO][5183] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.200/32] ContainerID="940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-m6w49" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:36:28.630358 containerd[1504]: 2026-01-20 01:36:28.588 [INFO][5183] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbf5d12564c ContainerID="940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-m6w49" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:36:28.630358 containerd[1504]: 2026-01-20 01:36:28.596 [INFO][5183] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-m6w49" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:36:28.630358 containerd[1504]: 2026-01-20 01:36:28.596 [INFO][5183] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-m6w49" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0", GenerateName:"calico-apiserver-c6469cbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e8ed20d-7ae4-416a-a5ca-28bbd455038b", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6469cbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797", Pod:"calico-apiserver-c6469cbc-m6w49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbf5d12564c", MAC:"32:dc:e1:e3:f5:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:36:28.630358 containerd[1504]: 2026-01-20 01:36:28.622 [INFO][5183] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797" Namespace="calico-apiserver" Pod="calico-apiserver-c6469cbc-m6w49" WorkloadEndpoint="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:36:28.672095 containerd[1504]: time="2026-01-20T01:36:28.670114895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:36:28.673071 containerd[1504]: time="2026-01-20T01:36:28.671421002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:36:28.673071 containerd[1504]: time="2026-01-20T01:36:28.671461315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:28.673071 containerd[1504]: time="2026-01-20T01:36:28.671589666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:36:28.710082 systemd[1]: run-containerd-runc-k8s.io-940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797-runc.TGBPnM.mount: Deactivated successfully. Jan 20 01:36:28.725215 systemd[1]: Started cri-containerd-940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797.scope - libcontainer container 940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797. 
Jan 20 01:36:28.789480 containerd[1504]: time="2026-01-20T01:36:28.789379987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6469cbc-m6w49,Uid:9e8ed20d-7ae4-416a-a5ca-28bbd455038b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797\"" Jan 20 01:36:28.792490 containerd[1504]: time="2026-01-20T01:36:28.792449968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:36:28.918317 systemd-networkd[1413]: cali31f593cd532: Gained IPv6LL Jan 20 01:36:28.956054 kubelet[2683]: E0120 01:36:28.955196 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" podUID="72e0069f-0dfe-458b-8762-abad903cdba3" Jan 20 01:36:28.956054 kubelet[2683]: E0120 01:36:28.955202 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" podUID="708249e2-7049-4ff6-8bf2-b94a10ee1bca" Jan 20 01:36:28.956054 kubelet[2683]: E0120 01:36:28.955680 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x6hg8" podUID="2652f767-bf33-49f7-b353-182252d33510" Jan 20 01:36:29.100119 containerd[1504]: time="2026-01-20T01:36:29.100038708Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:29.101365 containerd[1504]: time="2026-01-20T01:36:29.101293043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:36:29.101450 containerd[1504]: time="2026-01-20T01:36:29.101409050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:36:29.102166 kubelet[2683]: E0120 01:36:29.101828 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:36:29.102166 kubelet[2683]: E0120 01:36:29.101901 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:36:29.102166 kubelet[2683]: E0120 01:36:29.102067 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c6469cbc-m6w49_calico-apiserver(9e8ed20d-7ae4-416a-a5ca-28bbd455038b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:29.102504 kubelet[2683]: E0120 01:36:29.102436 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" podUID="9e8ed20d-7ae4-416a-a5ca-28bbd455038b" Jan 20 01:36:29.686426 systemd-networkd[1413]: calibbf5d12564c: Gained IPv6LL Jan 20 01:36:29.959685 kubelet[2683]: E0120 01:36:29.959359 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" podUID="9e8ed20d-7ae4-416a-a5ca-28bbd455038b" Jan 20 01:36:32.279562 containerd[1504]: time="2026-01-20T01:36:32.279483103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:36:32.593150 containerd[1504]: time="2026-01-20T01:36:32.592873285Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:32.594774 containerd[1504]: time="2026-01-20T01:36:32.594682276Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:36:32.594878 containerd[1504]: time="2026-01-20T01:36:32.594815863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:36:32.595292 kubelet[2683]: E0120 01:36:32.595200 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:36:32.595874 kubelet[2683]: E0120 01:36:32.595331 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:36:32.595874 kubelet[2683]: E0120 01:36:32.595511 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6b6cd5cfd4-psk5g_calico-system(ea506b49-1ce0-4278-a723-d51ad8fec903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:32.598787 containerd[1504]: time="2026-01-20T01:36:32.598421373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:36:32.911543 containerd[1504]: time="2026-01-20T01:36:32.911482491Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:32.913094 containerd[1504]: time="2026-01-20T01:36:32.913015810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:36:32.913192 containerd[1504]: time="2026-01-20T01:36:32.913138188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:36:32.913544 kubelet[2683]: E0120 01:36:32.913465 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:36:32.913648 kubelet[2683]: E0120 01:36:32.913559 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:36:32.913791 kubelet[2683]: E0120 01:36:32.913715 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6b6cd5cfd4-psk5g_calico-system(ea506b49-1ce0-4278-a723-d51ad8fec903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:32.913900 kubelet[2683]: E0120 01:36:32.913838 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b6cd5cfd4-psk5g" podUID="ea506b49-1ce0-4278-a723-d51ad8fec903" Jan 20 01:36:34.278147 containerd[1504]: time="2026-01-20T01:36:34.277699757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:36:34.587536 containerd[1504]: time="2026-01-20T01:36:34.587361285Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:34.589036 containerd[1504]: time="2026-01-20T01:36:34.588955214Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:36:34.589145 containerd[1504]: time="2026-01-20T01:36:34.589071760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:36:34.589971 kubelet[2683]: E0120 01:36:34.589478 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:36:34.589971 kubelet[2683]: E0120 01:36:34.589543 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:36:34.589971 kubelet[2683]: E0120 01:36:34.589697 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wdqf6_calico-system(fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:34.591734 containerd[1504]: time="2026-01-20T01:36:34.591572567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:36:34.899868 containerd[1504]: time="2026-01-20T01:36:34.899776588Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:34.900964 containerd[1504]: time="2026-01-20T01:36:34.900905944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
Jan 20 01:36:34.901095 containerd[1504]: time="2026-01-20T01:36:34.901041296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:36:34.902663 kubelet[2683]: E0120 01:36:34.902270 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:36:34.902663 kubelet[2683]: E0120 01:36:34.902560 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:36:34.904742 kubelet[2683]: E0120 01:36:34.903076 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wdqf6_calico-system(fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:34.904742 kubelet[2683]: E0120 01:36:34.903196 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:36:41.278407 containerd[1504]: time="2026-01-20T01:36:41.278277081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:36:41.595636 containerd[1504]: time="2026-01-20T01:36:41.595325785Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:41.597019 containerd[1504]: time="2026-01-20T01:36:41.596925009Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:36:41.597798 containerd[1504]: time="2026-01-20T01:36:41.596987271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:36:41.597863 kubelet[2683]: E0120 
01:36:41.597372 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:36:41.597863 kubelet[2683]: E0120 01:36:41.597457 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:36:41.597863 kubelet[2683]: E0120 01:36:41.597589 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d65cdbcf4-xqqft_calico-system(708249e2-7049-4ff6-8bf2-b94a10ee1bca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:41.597863 kubelet[2683]: E0120 01:36:41.597641 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" podUID="708249e2-7049-4ff6-8bf2-b94a10ee1bca" Jan 20 01:36:42.283102 containerd[1504]: time="2026-01-20T01:36:42.282978302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:36:42.596533 containerd[1504]: time="2026-01-20T01:36:42.595982914Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:42.597497 containerd[1504]: time="2026-01-20T01:36:42.597433469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:36:42.597598 containerd[1504]: time="2026-01-20T01:36:42.597534449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:36:42.598673 kubelet[2683]: E0120 01:36:42.597849 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:36:42.598673 kubelet[2683]: E0120 01:36:42.597955 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:36:42.598673 kubelet[2683]: E0120 01:36:42.598077 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-x6hg8_calico-system(2652f767-bf33-49f7-b353-182252d33510): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:42.598673 kubelet[2683]: E0120 01:36:42.598133 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x6hg8" podUID="2652f767-bf33-49f7-b353-182252d33510" Jan 20 01:36:43.277212 containerd[1504]: time="2026-01-20T01:36:43.277156134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:36:43.516411 systemd[1]: Started sshd@16-10.230.15.2:22-20.161.92.111:36592.service - OpenSSH per-connection server daemon (20.161.92.111:36592). Jan 20 01:36:43.591108 containerd[1504]: time="2026-01-20T01:36:43.588891889Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:43.593779 containerd[1504]: time="2026-01-20T01:36:43.593440376Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:36:43.593779 containerd[1504]: time="2026-01-20T01:36:43.593647629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:36:43.594770 kubelet[2683]: E0120 01:36:43.594141 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:36:43.594770 kubelet[2683]: E0120 01:36:43.594215 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:36:43.594770 kubelet[2683]: E0120 01:36:43.594348 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c6469cbc-qrwh4_calico-apiserver(72e0069f-0dfe-458b-8762-abad903cdba3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" logger="UnhandledError" Jan 20 01:36:43.594770 kubelet[2683]: E0120 01:36:43.594404 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" podUID="72e0069f-0dfe-458b-8762-abad903cdba3" Jan 20 01:36:44.214479 sshd[5273]: Accepted publickey for core from 20.161.92.111 port 36592 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:36:44.219503 sshd[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:44.234153 systemd-logind[1481]: New session 12 of user core. Jan 20 01:36:44.245217 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 01:36:44.282150 kubelet[2683]: E0120 01:36:44.281975 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b6cd5cfd4-psk5g" podUID="ea506b49-1ce0-4278-a723-d51ad8fec903" Jan 20 01:36:45.261251 sshd[5273]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:45.271643 systemd[1]: sshd@16-10.230.15.2:22-20.161.92.111:36592.service: Deactivated successfully. Jan 20 01:36:45.280337 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 01:36:45.282752 systemd-logind[1481]: Session 12 logged out. Waiting for processes to exit. Jan 20 01:36:45.288056 systemd-logind[1481]: Removed session 12. 
Jan 20 01:36:45.289486 containerd[1504]: time="2026-01-20T01:36:45.288822175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:36:45.293455 kubelet[2683]: E0120 01:36:45.288201 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:36:45.607246 containerd[1504]: time="2026-01-20T01:36:45.606931188Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:45.608654 containerd[1504]: time="2026-01-20T01:36:45.608537164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:36:45.608789 containerd[1504]: time="2026-01-20T01:36:45.608564535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:36:45.609163 kubelet[2683]: E0120 01:36:45.609079 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:36:45.609864 kubelet[2683]: E0120 01:36:45.609212 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:36:45.609864 kubelet[2683]: E0120 01:36:45.609366 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c6469cbc-m6w49_calico-apiserver(9e8ed20d-7ae4-416a-a5ca-28bbd455038b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:45.609864 kubelet[2683]: E0120 01:36:45.609431 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" podUID="9e8ed20d-7ae4-416a-a5ca-28bbd455038b" Jan 20 01:36:50.373307 systemd[1]: Started sshd@17-10.230.15.2:22-20.161.92.111:36604.service - OpenSSH per-connection server daemon (20.161.92.111:36604). Jan 20 01:36:51.003382 sshd[5317]: Accepted publickey for core from 20.161.92.111 port 36604 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:36:51.008885 sshd[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:51.019987 systemd-logind[1481]: New session 13 of user core. Jan 20 01:36:51.030288 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 01:36:51.725711 sshd[5317]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:51.733134 systemd[1]: sshd@17-10.230.15.2:22-20.161.92.111:36604.service: Deactivated successfully. Jan 20 01:36:51.736834 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 01:36:51.738242 systemd-logind[1481]: Session 13 logged out. Waiting for processes to exit. Jan 20 01:36:51.740750 systemd-logind[1481]: Removed session 13. Jan 20 01:36:53.280309 kubelet[2683]: E0120 01:36:53.280098 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" podUID="708249e2-7049-4ff6-8bf2-b94a10ee1bca" Jan 20 01:36:54.289011 kubelet[2683]: E0120 01:36:54.288883 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" podUID="72e0069f-0dfe-458b-8762-abad903cdba3" Jan 20 01:36:55.278853 containerd[1504]: time="2026-01-20T01:36:55.277887781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:36:55.606271 containerd[1504]: time="2026-01-20T01:36:55.605741151Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:55.607666 containerd[1504]: time="2026-01-20T01:36:55.607552639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:36:55.607841 containerd[1504]: time="2026-01-20T01:36:55.607732076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 
01:36:55.608999 kubelet[2683]: E0120 01:36:55.608172 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:36:55.608999 kubelet[2683]: E0120 01:36:55.608263 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:36:55.608999 kubelet[2683]: E0120 01:36:55.608448 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6b6cd5cfd4-psk5g_calico-system(ea506b49-1ce0-4278-a723-d51ad8fec903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:55.611452 containerd[1504]: time="2026-01-20T01:36:55.610808204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:36:55.750309 systemd[1]: Started sshd@18-10.230.15.2:22-134.209.94.87:47060.service - OpenSSH per-connection server daemon (134.209.94.87:47060). Jan 20 01:36:55.981914 containerd[1504]: time="2026-01-20T01:36:55.981669642Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:36:55.983201 containerd[1504]: time="2026-01-20T01:36:55.982974264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:36:55.983201 containerd[1504]: time="2026-01-20T01:36:55.983138346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:36:55.984983 kubelet[2683]: E0120 01:36:55.983884 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:36:55.984983 kubelet[2683]: E0120 01:36:55.984100 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:36:55.984983 kubelet[2683]: E0120 01:36:55.984406 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6b6cd5cfd4-psk5g_calico-system(ea506b49-1ce0-4278-a723-d51ad8fec903): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:36:55.985216 kubelet[2683]: E0120 01:36:55.984684 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b6cd5cfd4-psk5g" podUID="ea506b49-1ce0-4278-a723-d51ad8fec903" Jan 20 01:36:56.191707 sshd[5330]: Connection closed by authenticating user root 134.209.94.87 port 47060 [preauth] Jan 20 01:36:56.196256 systemd[1]: sshd@18-10.230.15.2:22-134.209.94.87:47060.service: Deactivated successfully. Jan 20 01:36:56.831303 systemd[1]: Started sshd@19-10.230.15.2:22-20.161.92.111:39166.service - OpenSSH per-connection server daemon (20.161.92.111:39166). Jan 20 01:36:57.281164 kubelet[2683]: E0120 01:36:57.280895 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x6hg8" podUID="2652f767-bf33-49f7-b353-182252d33510" Jan 20 01:36:57.405288 sshd[5335]: Accepted publickey for core from 20.161.92.111 port 39166 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:36:57.408512 sshd[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:36:57.418010 systemd-logind[1481]: New session 14 of user core. Jan 20 01:36:57.424200 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 01:36:57.948854 sshd[5335]: pam_unix(sshd:session): session closed for user core Jan 20 01:36:57.955376 systemd[1]: sshd@19-10.230.15.2:22-20.161.92.111:39166.service: Deactivated successfully. Jan 20 01:36:57.958771 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 01:36:57.960568 systemd-logind[1481]: Session 14 logged out. Waiting for processes to exit. Jan 20 01:36:57.962235 systemd-logind[1481]: Removed session 14. 
Jan 20 01:37:00.291344 containerd[1504]: time="2026-01-20T01:37:00.290379810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:37:00.612300 containerd[1504]: time="2026-01-20T01:37:00.611923477Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:37:00.614114 containerd[1504]: time="2026-01-20T01:37:00.613987776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:37:00.614114 containerd[1504]: time="2026-01-20T01:37:00.614039518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:37:00.615483 kubelet[2683]: E0120 01:37:00.614424 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:37:00.615483 kubelet[2683]: E0120 01:37:00.614504 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:37:00.615483 kubelet[2683]: E0120 01:37:00.614661 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wdqf6_calico-system(fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:00.616815 containerd[1504]: time="2026-01-20T01:37:00.616723092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:37:00.934390 containerd[1504]: time="2026-01-20T01:37:00.934266145Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:37:00.935743 containerd[1504]: time="2026-01-20T01:37:00.935672569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:37:00.935916 containerd[1504]: time="2026-01-20T01:37:00.935819321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:37:00.936987 kubelet[2683]: E0120 01:37:00.936550 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:37:00.936987 kubelet[2683]: E0120 01:37:00.936663 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:37:00.936987 kubelet[2683]: E0120 01:37:00.936809 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wdqf6_calico-system(fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:00.937457 kubelet[2683]: E0120 01:37:00.937350 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:37:01.278403 kubelet[2683]: E0120 01:37:01.278099 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" podUID="9e8ed20d-7ae4-416a-a5ca-28bbd455038b" Jan 20 01:37:03.063151 systemd[1]: Started sshd@20-10.230.15.2:22-20.161.92.111:56070.service - OpenSSH per-connection server daemon (20.161.92.111:56070). Jan 20 01:37:03.676076 sshd[5361]: Accepted publickey for core from 20.161.92.111 port 56070 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:37:03.678997 sshd[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:03.688107 systemd-logind[1481]: New session 15 of user core. Jan 20 01:37:03.695843 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 01:37:04.202893 sshd[5361]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:04.210012 systemd-logind[1481]: Session 15 logged out. Waiting for processes to exit. Jan 20 01:37:04.210801 systemd[1]: sshd@20-10.230.15.2:22-20.161.92.111:56070.service: Deactivated successfully. Jan 20 01:37:04.214501 systemd[1]: session-15.scope: Deactivated successfully. 
Jan 20 01:37:04.216176 systemd-logind[1481]: Removed session 15. Jan 20 01:37:04.311271 systemd[1]: Started sshd@21-10.230.15.2:22-20.161.92.111:56080.service - OpenSSH per-connection server daemon (20.161.92.111:56080). Jan 20 01:37:04.879525 sshd[5375]: Accepted publickey for core from 20.161.92.111 port 56080 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:37:04.881942 sshd[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:04.888799 systemd-logind[1481]: New session 16 of user core. Jan 20 01:37:04.898250 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 01:37:05.279286 containerd[1504]: time="2026-01-20T01:37:05.279210067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:37:05.469003 sshd[5375]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:05.477156 systemd[1]: sshd@21-10.230.15.2:22-20.161.92.111:56080.service: Deactivated successfully. Jan 20 01:37:05.481222 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 01:37:05.483515 systemd-logind[1481]: Session 16 logged out. Waiting for processes to exit. Jan 20 01:37:05.485140 systemd-logind[1481]: Removed session 16. Jan 20 01:37:05.579343 systemd[1]: Started sshd@22-10.230.15.2:22-20.161.92.111:56084.service - OpenSSH per-connection server daemon (20.161.92.111:56084). Jan 20 01:37:05.592733 containerd[1504]: time="2026-01-20T01:37:05.592650953Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:37:05.594551 containerd[1504]: time="2026-01-20T01:37:05.594377279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:37:05.594551 containerd[1504]: time="2026-01-20T01:37:05.594476972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:37:05.594995 kubelet[2683]: E0120 01:37:05.594812 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:37:05.594995 kubelet[2683]: E0120 01:37:05.594904 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:37:05.595768 kubelet[2683]: E0120 01:37:05.595619 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d65cdbcf4-xqqft_calico-system(708249e2-7049-4ff6-8bf2-b94a10ee1bca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:05.595768 kubelet[2683]: E0120 01:37:05.595695 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" podUID="708249e2-7049-4ff6-8bf2-b94a10ee1bca" Jan 20 01:37:06.164823 sshd[5385]: Accepted publickey for core from 20.161.92.111 port 56084 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:37:06.167297 sshd[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:06.174589 systemd-logind[1481]: New session 17 of user core. Jan 20 01:37:06.185229 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 01:37:06.681483 sshd[5385]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:06.685591 systemd-logind[1481]: Session 17 logged out. Waiting for processes to exit. Jan 20 01:37:06.686106 systemd[1]: sshd@22-10.230.15.2:22-20.161.92.111:56084.service: Deactivated successfully. Jan 20 01:37:06.689732 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 01:37:06.691971 systemd-logind[1481]: Removed session 17. Jan 20 01:37:08.278142 kubelet[2683]: E0120 01:37:08.277976 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b6cd5cfd4-psk5g" podUID="ea506b49-1ce0-4278-a723-d51ad8fec903" Jan 20 01:37:09.278459 containerd[1504]: time="2026-01-20T01:37:09.278372056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:37:09.588699 containerd[1504]: time="2026-01-20T01:37:09.588214849Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:37:09.590622 containerd[1504]: time="2026-01-20T01:37:09.590567971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:37:09.590740 containerd[1504]: time="2026-01-20T01:37:09.590679124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:37:09.591162 kubelet[2683]: E0120 01:37:09.591074 
2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:37:09.591846 kubelet[2683]: E0120 01:37:09.591166 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:37:09.591846 kubelet[2683]: E0120 01:37:09.591296 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c6469cbc-qrwh4_calico-apiserver(72e0069f-0dfe-458b-8762-abad903cdba3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:09.591846 kubelet[2683]: E0120 01:37:09.591365 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" podUID="72e0069f-0dfe-458b-8762-abad903cdba3" Jan 20 01:37:11.793293 systemd[1]: Started sshd@23-10.230.15.2:22-20.161.92.111:56088.service - OpenSSH per-connection server daemon (20.161.92.111:56088). Jan 20 01:37:12.280455 containerd[1504]: time="2026-01-20T01:37:12.279603243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:37:12.361318 sshd[5406]: Accepted publickey for core from 20.161.92.111 port 56088 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:37:12.364349 sshd[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:12.378728 systemd-logind[1481]: New session 18 of user core. Jan 20 01:37:12.384164 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 01:37:12.603541 containerd[1504]: time="2026-01-20T01:37:12.603164071Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:37:12.605337 containerd[1504]: time="2026-01-20T01:37:12.605180001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:37:12.605337 containerd[1504]: time="2026-01-20T01:37:12.605254368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:37:12.605595 kubelet[2683]: E0120 01:37:12.605521 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:37:12.607280 kubelet[2683]: E0120 01:37:12.605628 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:37:12.607280 kubelet[2683]: E0120 01:37:12.606656 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-x6hg8_calico-system(2652f767-bf33-49f7-b353-182252d33510): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:12.607280 kubelet[2683]: E0120 01:37:12.606918 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x6hg8" podUID="2652f767-bf33-49f7-b353-182252d33510" Jan 20 01:37:12.607484 containerd[1504]: time="2026-01-20T01:37:12.606419712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:37:12.871787 sshd[5406]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:12.878845 systemd[1]: sshd@23-10.230.15.2:22-20.161.92.111:56088.service: Deactivated successfully. Jan 20 01:37:12.882173 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 01:37:12.883448 systemd-logind[1481]: Session 18 logged out. Waiting for processes to exit. Jan 20 01:37:12.885538 systemd-logind[1481]: Removed session 18. 
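The goldmane pull above fails the same way, and the "Back-off pulling image" entries surrounding it show kubelet's per-image retry throttle: each failed pull restarts an exponential backoff window, and pod syncs that land inside the window are skipped with ImagePullBackOff instead of hitting the registry again. A sketch of that schedule, assuming the upstream kubelet defaults of a 10s initial delay doubling to a 5m cap (documented defaults, not values read from this log):

```go
// Sketch of the retry cadence behind ImagePullBackOff; this is not
// kubelet's code, just the exponential schedule it is documented to use.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute // assumed defaults
	elapsed := time.Duration(0)
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d at t+%v\n", attempt, elapsed)
		elapsed += delay
		if delay *= 2; delay > maxDelay {
			delay = maxDelay // backoff is capped, so retries never stop entirely
		}
	}
}
```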
Jan 20 01:37:12.929550 containerd[1504]: time="2026-01-20T01:37:12.929454226Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:37:12.930674 containerd[1504]: time="2026-01-20T01:37:12.930623264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:37:12.930790 containerd[1504]: time="2026-01-20T01:37:12.930720322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:37:12.932002 kubelet[2683]: E0120 01:37:12.931223 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:37:12.932002 kubelet[2683]: E0120 01:37:12.931309 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:37:12.932002 kubelet[2683]: E0120 01:37:12.931476 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c6469cbc-m6w49_calico-apiserver(9e8ed20d-7ae4-416a-a5ca-28bbd455038b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:12.932002 kubelet[2683]: E0120 01:37:12.931535 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" podUID="9e8ed20d-7ae4-416a-a5ca-28bbd455038b" Jan 20 01:37:12.996284 systemd[1]: Started sshd@24-10.230.15.2:22-152.42.141.173:36822.service - OpenSSH per-connection server daemon (152.42.141.173:36822). 
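Alongside the legitimate core-user sessions, the journal also records drive-by root logins that close at the preauth stage (134.209.94.87 above, 152.42.141.173 below). A quick way to tally such scanners from a saved copy of this journal; the file name and the exact pattern are assumptions:

```go
// Sketch: count sshd "[preauth]" closures per user/source address in a
// plain-text journal export (hypothetical file "journal.log").
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`Connection closed by authenticating user (\S+) (\S+) port \d+ \[preauth\]`)
	counts := map[string]int{}
	f, err := os.Open("journal.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // these entries run long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]+" from "+m[2]]++ // key on user and source address
		}
	}
	for key, n := range counts {
		fmt.Printf("%d preauth closures: %s\n", n, key)
	}
}
```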
Jan 20 01:37:14.281714 kubelet[2683]: E0120 01:37:14.281524 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:37:14.411630 sshd[5420]: Connection closed by authenticating user root 152.42.141.173 port 36822 [preauth] Jan 20 01:37:14.415594 systemd[1]: sshd@24-10.230.15.2:22-152.42.141.173:36822.service: Deactivated successfully. Jan 20 01:37:17.277756 kubelet[2683]: E0120 01:37:17.277088 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" podUID="708249e2-7049-4ff6-8bf2-b94a10ee1bca" Jan 20 01:37:17.982338 systemd[1]: Started sshd@25-10.230.15.2:22-20.161.92.111:41492.service - OpenSSH per-connection server daemon (20.161.92.111:41492). Jan 20 01:37:18.560617 sshd[5446]: Accepted publickey for core from 20.161.92.111 port 41492 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:37:18.563480 sshd[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:18.572679 systemd-logind[1481]: New session 19 of user core. Jan 20 01:37:18.584314 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 01:37:19.101443 sshd[5446]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:19.108047 systemd[1]: sshd@25-10.230.15.2:22-20.161.92.111:41492.service: Deactivated successfully. Jan 20 01:37:19.112842 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 01:37:19.114572 systemd-logind[1481]: Session 19 logged out. Waiting for processes to exit. Jan 20 01:37:19.116808 systemd-logind[1481]: Removed session 19. 
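The sandbox cleanup that follows (StopPodSandbox/RemovePodSandbox for the stale calico-apiserver, goldmane, and kube-controllers sandboxes) leans on two idempotency guards that the Calico entries spell out: a CNI DEL whose CNI_CONTAINERID no longer matches the WorkloadEndpoint's recorded container ID must not delete the endpoint, and an IPAM release for an address that was already freed is logged as a warning and ignored, so repeated teardowns stay safe. An illustrative sketch of those guards, not Calico's actual code:

```go
// Sketch of the two teardown guards visible in the entries below.
package main

import "log"

type workloadEndpoint struct{ ContainerID string }

// release stands in for the IPAM plugin: false means the handle had no
// live allocation (the "Asked to release address but it doesn't exist" case).
func release(handleID string) bool { return false }

func teardown(cniContainerID string, wep workloadEndpoint, handleID string) {
	if wep.ContainerID != cniContainerID {
		// A newer sandbox owns the endpoint; a stale DEL must not remove it.
		log.Print("CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP")
	}
	if !release(handleID) {
		log.Print("asked to release address but it doesn't exist; ignoring")
	}
}

func main() {
	teardown("stale-sandbox-id", workloadEndpoint{ContainerID: "live-sandbox-id"}, "k8s-pod-network.stale-sandbox-id")
}
```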
Jan 20 01:37:20.278428 kubelet[2683]: E0120 01:37:20.278013 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" podUID="72e0069f-0dfe-458b-8762-abad903cdba3" Jan 20 01:37:21.306439 containerd[1504]: time="2026-01-20T01:37:21.306326990Z" level=info msg="StopPodSandbox for \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\"" Jan 20 01:37:21.524087 containerd[1504]: 2026-01-20 01:37:21.454 [WARNING][5469] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0", GenerateName:"calico-apiserver-c6469cbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"72e0069f-0dfe-458b-8762-abad903cdba3", ResourceVersion:"1441", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6469cbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a", Pod:"calico-apiserver-c6469cbc-qrwh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e78c8d76f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:37:21.524087 containerd[1504]: 2026-01-20 01:37:21.454 [INFO][5469] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:37:21.524087 containerd[1504]: 2026-01-20 01:37:21.454 [INFO][5469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" iface="eth0" netns="" Jan 20 01:37:21.524087 containerd[1504]: 2026-01-20 01:37:21.454 [INFO][5469] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:37:21.524087 containerd[1504]: 2026-01-20 01:37:21.455 [INFO][5469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:37:21.524087 containerd[1504]: 2026-01-20 01:37:21.501 [INFO][5476] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" HandleID="k8s-pod-network.83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:37:21.524087 containerd[1504]: 2026-01-20 01:37:21.501 [INFO][5476] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:37:21.524087 containerd[1504]: 2026-01-20 01:37:21.501 [INFO][5476] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:37:21.524087 containerd[1504]: 2026-01-20 01:37:21.513 [WARNING][5476] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" HandleID="k8s-pod-network.83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:37:21.524087 containerd[1504]: 2026-01-20 01:37:21.513 [INFO][5476] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" HandleID="k8s-pod-network.83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:37:21.524087 containerd[1504]: 2026-01-20 01:37:21.516 [INFO][5476] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:37:21.524087 containerd[1504]: 2026-01-20 01:37:21.520 [INFO][5469] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:37:21.526765 containerd[1504]: time="2026-01-20T01:37:21.524959186Z" level=info msg="TearDown network for sandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\" successfully" Jan 20 01:37:21.526765 containerd[1504]: time="2026-01-20T01:37:21.525026974Z" level=info msg="StopPodSandbox for \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\" returns successfully" Jan 20 01:37:21.526765 containerd[1504]: time="2026-01-20T01:37:21.525914537Z" level=info msg="RemovePodSandbox for \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\"" Jan 20 01:37:21.526765 containerd[1504]: time="2026-01-20T01:37:21.525994745Z" level=info msg="Forcibly stopping sandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\"" Jan 20 01:37:21.654733 containerd[1504]: 2026-01-20 01:37:21.583 [WARNING][5490] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0", GenerateName:"calico-apiserver-c6469cbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"72e0069f-0dfe-458b-8762-abad903cdba3", ResourceVersion:"1441", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6469cbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"4bce0ec6b12bacf307c991a114ff823cfff92aacc9b68b89642cd3bfa477f91a", Pod:"calico-apiserver-c6469cbc-qrwh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e78c8d76f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:37:21.654733 containerd[1504]: 2026-01-20 01:37:21.584 [INFO][5490] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:37:21.654733 containerd[1504]: 2026-01-20 01:37:21.584 [INFO][5490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" iface="eth0" netns="" Jan 20 01:37:21.654733 containerd[1504]: 2026-01-20 01:37:21.584 [INFO][5490] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:37:21.654733 containerd[1504]: 2026-01-20 01:37:21.584 [INFO][5490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:37:21.654733 containerd[1504]: 2026-01-20 01:37:21.631 [INFO][5497] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" HandleID="k8s-pod-network.83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:37:21.654733 containerd[1504]: 2026-01-20 01:37:21.632 [INFO][5497] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:37:21.654733 containerd[1504]: 2026-01-20 01:37:21.632 [INFO][5497] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:37:21.654733 containerd[1504]: 2026-01-20 01:37:21.646 [WARNING][5497] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" HandleID="k8s-pod-network.83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:37:21.654733 containerd[1504]: 2026-01-20 01:37:21.646 [INFO][5497] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" HandleID="k8s-pod-network.83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--qrwh4-eth0" Jan 20 01:37:21.654733 containerd[1504]: 2026-01-20 01:37:21.649 [INFO][5497] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:37:21.654733 containerd[1504]: 2026-01-20 01:37:21.651 [INFO][5490] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b" Jan 20 01:37:21.656661 containerd[1504]: time="2026-01-20T01:37:21.655550775Z" level=info msg="TearDown network for sandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\" successfully" Jan 20 01:37:21.673016 containerd[1504]: time="2026-01-20T01:37:21.672821514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:37:21.673016 containerd[1504]: time="2026-01-20T01:37:21.672928010Z" level=info msg="RemovePodSandbox \"83154a9ac48796293e8cecdab3215a5bdb29dbe79324e2488e9c354ec54ee91b\" returns successfully" Jan 20 01:37:21.674980 containerd[1504]: time="2026-01-20T01:37:21.674830434Z" level=info msg="StopPodSandbox for \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\"" Jan 20 01:37:21.810229 containerd[1504]: 2026-01-20 01:37:21.743 [WARNING][5511] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"2652f767-bf33-49f7-b353-182252d33510", ResourceVersion:"1392", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1", Pod:"goldmane-7c778bb748-x6hg8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.84.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali31f593cd532", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:37:21.810229 containerd[1504]: 2026-01-20 01:37:21.744 [INFO][5511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:37:21.810229 containerd[1504]: 2026-01-20 01:37:21.744 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" iface="eth0" netns="" Jan 20 01:37:21.810229 containerd[1504]: 2026-01-20 01:37:21.744 [INFO][5511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:37:21.810229 containerd[1504]: 2026-01-20 01:37:21.744 [INFO][5511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:37:21.810229 containerd[1504]: 2026-01-20 01:37:21.782 [INFO][5518] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" HandleID="k8s-pod-network.921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Workload="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:37:21.810229 containerd[1504]: 2026-01-20 01:37:21.782 [INFO][5518] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:37:21.810229 containerd[1504]: 2026-01-20 01:37:21.783 [INFO][5518] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:37:21.810229 containerd[1504]: 2026-01-20 01:37:21.799 [WARNING][5518] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" HandleID="k8s-pod-network.921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Workload="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:37:21.810229 containerd[1504]: 2026-01-20 01:37:21.799 [INFO][5518] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" HandleID="k8s-pod-network.921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Workload="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:37:21.810229 containerd[1504]: 2026-01-20 01:37:21.803 [INFO][5518] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:37:21.810229 containerd[1504]: 2026-01-20 01:37:21.807 [INFO][5511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:37:21.811314 containerd[1504]: time="2026-01-20T01:37:21.810368127Z" level=info msg="TearDown network for sandbox \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\" successfully" Jan 20 01:37:21.811314 containerd[1504]: time="2026-01-20T01:37:21.810430423Z" level=info msg="StopPodSandbox for \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\" returns successfully" Jan 20 01:37:21.811658 containerd[1504]: time="2026-01-20T01:37:21.811613420Z" level=info msg="RemovePodSandbox for \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\"" Jan 20 01:37:21.811744 containerd[1504]: time="2026-01-20T01:37:21.811664288Z" level=info msg="Forcibly stopping sandbox \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\"" Jan 20 01:37:21.933550 containerd[1504]: 2026-01-20 01:37:21.880 [WARNING][5532] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"2652f767-bf33-49f7-b353-182252d33510", ResourceVersion:"1392", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"4e730aff6e0168a4bd3d71cf4c520b6593c1436ff0c9cccf58f1f7fcbefd19e1", Pod:"goldmane-7c778bb748-x6hg8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.84.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali31f593cd532", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:37:21.933550 containerd[1504]: 2026-01-20 01:37:21.881 [INFO][5532] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:37:21.933550 containerd[1504]: 2026-01-20 01:37:21.881 [INFO][5532] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" iface="eth0" netns="" Jan 20 01:37:21.933550 containerd[1504]: 2026-01-20 01:37:21.881 [INFO][5532] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:37:21.933550 containerd[1504]: 2026-01-20 01:37:21.881 [INFO][5532] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:37:21.933550 containerd[1504]: 2026-01-20 01:37:21.914 [INFO][5539] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" HandleID="k8s-pod-network.921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Workload="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:37:21.933550 containerd[1504]: 2026-01-20 01:37:21.914 [INFO][5539] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:37:21.933550 containerd[1504]: 2026-01-20 01:37:21.915 [INFO][5539] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:37:21.933550 containerd[1504]: 2026-01-20 01:37:21.926 [WARNING][5539] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" HandleID="k8s-pod-network.921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Workload="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:37:21.933550 containerd[1504]: 2026-01-20 01:37:21.926 [INFO][5539] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" HandleID="k8s-pod-network.921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Workload="srv--nmle2.gb1.brightbox.com-k8s-goldmane--7c778bb748--x6hg8-eth0" Jan 20 01:37:21.933550 containerd[1504]: 2026-01-20 01:37:21.928 [INFO][5539] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:37:21.933550 containerd[1504]: 2026-01-20 01:37:21.930 [INFO][5532] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c" Jan 20 01:37:21.933550 containerd[1504]: time="2026-01-20T01:37:21.933463922Z" level=info msg="TearDown network for sandbox \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\" successfully" Jan 20 01:37:21.938515 containerd[1504]: time="2026-01-20T01:37:21.938459033Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:37:21.938600 containerd[1504]: time="2026-01-20T01:37:21.938549760Z" level=info msg="RemovePodSandbox \"921561e5b443062d1a0aae0cc1dc09a06035bc19ce3595bd6c873c133277fd3c\" returns successfully" Jan 20 01:37:21.939429 containerd[1504]: time="2026-01-20T01:37:21.939394082Z" level=info msg="StopPodSandbox for \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\"" Jan 20 01:37:22.070079 containerd[1504]: 2026-01-20 01:37:21.997 [WARNING][5553] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0", GenerateName:"calico-kube-controllers-7d65cdbcf4-", Namespace:"calico-system", SelfLink:"", UID:"708249e2-7049-4ff6-8bf2-b94a10ee1bca", ResourceVersion:"1424", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d65cdbcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c", Pod:"calico-kube-controllers-7d65cdbcf4-xqqft", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliabe6beaf2a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:37:22.070079 containerd[1504]: 2026-01-20 01:37:21.998 [INFO][5553] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:37:22.070079 containerd[1504]: 2026-01-20 01:37:21.998 [INFO][5553] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" iface="eth0" netns="" Jan 20 01:37:22.070079 containerd[1504]: 2026-01-20 01:37:21.998 [INFO][5553] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:37:22.070079 containerd[1504]: 2026-01-20 01:37:21.998 [INFO][5553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:37:22.070079 containerd[1504]: 2026-01-20 01:37:22.047 [INFO][5560] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" HandleID="k8s-pod-network.01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:37:22.070079 containerd[1504]: 2026-01-20 01:37:22.048 [INFO][5560] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:37:22.070079 containerd[1504]: 2026-01-20 01:37:22.048 [INFO][5560] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:37:22.070079 containerd[1504]: 2026-01-20 01:37:22.062 [WARNING][5560] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" HandleID="k8s-pod-network.01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:37:22.070079 containerd[1504]: 2026-01-20 01:37:22.062 [INFO][5560] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" HandleID="k8s-pod-network.01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:37:22.070079 containerd[1504]: 2026-01-20 01:37:22.064 [INFO][5560] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:37:22.070079 containerd[1504]: 2026-01-20 01:37:22.067 [INFO][5553] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:37:22.073684 containerd[1504]: time="2026-01-20T01:37:22.070779123Z" level=info msg="TearDown network for sandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\" successfully" Jan 20 01:37:22.073684 containerd[1504]: time="2026-01-20T01:37:22.070876406Z" level=info msg="StopPodSandbox for \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\" returns successfully" Jan 20 01:37:22.073684 containerd[1504]: time="2026-01-20T01:37:22.072721960Z" level=info msg="RemovePodSandbox for \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\"" Jan 20 01:37:22.073684 containerd[1504]: time="2026-01-20T01:37:22.072762160Z" level=info msg="Forcibly stopping sandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\"" Jan 20 01:37:22.215423 containerd[1504]: 2026-01-20 01:37:22.137 [WARNING][5574] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0", GenerateName:"calico-kube-controllers-7d65cdbcf4-", Namespace:"calico-system", SelfLink:"", UID:"708249e2-7049-4ff6-8bf2-b94a10ee1bca", ResourceVersion:"1424", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d65cdbcf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"d5ac216078ba5c9addc880a7d6fb2eb406e1256d2d7d6eaa7740e2b26df4a90c", Pod:"calico-kube-controllers-7d65cdbcf4-xqqft", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliabe6beaf2a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:37:22.215423 containerd[1504]: 2026-01-20 01:37:22.137 [INFO][5574] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:37:22.215423 containerd[1504]: 2026-01-20 01:37:22.137 [INFO][5574] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" iface="eth0" netns="" Jan 20 01:37:22.215423 containerd[1504]: 2026-01-20 01:37:22.138 [INFO][5574] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:37:22.215423 containerd[1504]: 2026-01-20 01:37:22.138 [INFO][5574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:37:22.215423 containerd[1504]: 2026-01-20 01:37:22.194 [INFO][5581] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" HandleID="k8s-pod-network.01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:37:22.215423 containerd[1504]: 2026-01-20 01:37:22.194 [INFO][5581] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:37:22.215423 containerd[1504]: 2026-01-20 01:37:22.194 [INFO][5581] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:37:22.215423 containerd[1504]: 2026-01-20 01:37:22.206 [WARNING][5581] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" HandleID="k8s-pod-network.01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:37:22.215423 containerd[1504]: 2026-01-20 01:37:22.206 [INFO][5581] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" HandleID="k8s-pod-network.01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--kube--controllers--7d65cdbcf4--xqqft-eth0" Jan 20 01:37:22.215423 containerd[1504]: 2026-01-20 01:37:22.209 [INFO][5581] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:37:22.215423 containerd[1504]: 2026-01-20 01:37:22.212 [INFO][5574] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0" Jan 20 01:37:22.215423 containerd[1504]: time="2026-01-20T01:37:22.214749235Z" level=info msg="TearDown network for sandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\" successfully" Jan 20 01:37:22.224069 containerd[1504]: time="2026-01-20T01:37:22.224005939Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:37:22.224596 containerd[1504]: time="2026-01-20T01:37:22.224101868Z" level=info msg="RemovePodSandbox \"01cbe37a3e8aa23e8ace3f46109417370ff1e8c437a888efacc9f52a5d1739f0\" returns successfully" Jan 20 01:37:22.232495 containerd[1504]: time="2026-01-20T01:37:22.232434684Z" level=info msg="StopPodSandbox for \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\"" Jan 20 01:37:22.284612 kubelet[2683]: E0120 01:37:22.284523 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b6cd5cfd4-psk5g" podUID="ea506b49-1ce0-4278-a723-d51ad8fec903" Jan 20 01:37:22.374616 containerd[1504]: 2026-01-20 01:37:22.325 [WARNING][5595] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0", GenerateName:"calico-apiserver-c6469cbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e8ed20d-7ae4-416a-a5ca-28bbd455038b", ResourceVersion:"1393", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6469cbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797", Pod:"calico-apiserver-c6469cbc-m6w49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbf5d12564c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:37:22.374616 containerd[1504]: 2026-01-20 01:37:22.326 [INFO][5595] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:37:22.374616 containerd[1504]: 2026-01-20 01:37:22.326 [INFO][5595] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" iface="eth0" netns="" Jan 20 01:37:22.374616 containerd[1504]: 2026-01-20 01:37:22.326 [INFO][5595] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:37:22.374616 containerd[1504]: 2026-01-20 01:37:22.326 [INFO][5595] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:37:22.374616 containerd[1504]: 2026-01-20 01:37:22.357 [INFO][5603] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" HandleID="k8s-pod-network.9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:37:22.374616 containerd[1504]: 2026-01-20 01:37:22.357 [INFO][5603] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:37:22.374616 containerd[1504]: 2026-01-20 01:37:22.357 [INFO][5603] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:37:22.374616 containerd[1504]: 2026-01-20 01:37:22.367 [WARNING][5603] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" HandleID="k8s-pod-network.9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:37:22.374616 containerd[1504]: 2026-01-20 01:37:22.367 [INFO][5603] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" HandleID="k8s-pod-network.9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:37:22.374616 containerd[1504]: 2026-01-20 01:37:22.369 [INFO][5603] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:37:22.374616 containerd[1504]: 2026-01-20 01:37:22.371 [INFO][5595] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:37:22.376058 containerd[1504]: time="2026-01-20T01:37:22.374731278Z" level=info msg="TearDown network for sandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\" successfully" Jan 20 01:37:22.376058 containerd[1504]: time="2026-01-20T01:37:22.374797203Z" level=info msg="StopPodSandbox for \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\" returns successfully" Jan 20 01:37:22.377061 containerd[1504]: time="2026-01-20T01:37:22.377019421Z" level=info msg="RemovePodSandbox for \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\"" Jan 20 01:37:22.377138 containerd[1504]: time="2026-01-20T01:37:22.377087535Z" level=info msg="Forcibly stopping sandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\"" Jan 20 01:37:22.496667 containerd[1504]: 2026-01-20 01:37:22.431 [WARNING][5617] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0", GenerateName:"calico-apiserver-c6469cbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e8ed20d-7ae4-416a-a5ca-28bbd455038b", ResourceVersion:"1393", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 35, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6469cbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nmle2.gb1.brightbox.com", ContainerID:"940512bbde73f99fad09843904aa1a9492b804254c3128b8ab55e78a9f03b797", Pod:"calico-apiserver-c6469cbc-m6w49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbf5d12564c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:37:22.496667 containerd[1504]: 2026-01-20 01:37:22.434 [INFO][5617] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:37:22.496667 containerd[1504]: 2026-01-20 01:37:22.434 [INFO][5617] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" iface="eth0" netns="" Jan 20 01:37:22.496667 containerd[1504]: 2026-01-20 01:37:22.434 [INFO][5617] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:37:22.496667 containerd[1504]: 2026-01-20 01:37:22.434 [INFO][5617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:37:22.496667 containerd[1504]: 2026-01-20 01:37:22.476 [INFO][5624] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" HandleID="k8s-pod-network.9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:37:22.496667 containerd[1504]: 2026-01-20 01:37:22.477 [INFO][5624] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:37:22.496667 containerd[1504]: 2026-01-20 01:37:22.477 [INFO][5624] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:37:22.496667 containerd[1504]: 2026-01-20 01:37:22.487 [WARNING][5624] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" HandleID="k8s-pod-network.9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:37:22.496667 containerd[1504]: 2026-01-20 01:37:22.487 [INFO][5624] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" HandleID="k8s-pod-network.9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Workload="srv--nmle2.gb1.brightbox.com-k8s-calico--apiserver--c6469cbc--m6w49-eth0" Jan 20 01:37:22.496667 containerd[1504]: 2026-01-20 01:37:22.491 [INFO][5624] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:37:22.496667 containerd[1504]: 2026-01-20 01:37:22.493 [INFO][5617] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231" Jan 20 01:37:22.496667 containerd[1504]: time="2026-01-20T01:37:22.496358620Z" level=info msg="TearDown network for sandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\" successfully" Jan 20 01:37:22.546164 containerd[1504]: time="2026-01-20T01:37:22.544892436Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:37:22.546164 containerd[1504]: time="2026-01-20T01:37:22.545318276Z" level=info msg="RemovePodSandbox \"9804b82c11f386c52fa7114129fc473f161288ede0be68b224e6815fa60e3231\" returns successfully" Jan 20 01:37:24.211569 systemd[1]: Started sshd@26-10.230.15.2:22-20.161.92.111:42286.service - OpenSSH per-connection server daemon (20.161.92.111:42286). Jan 20 01:37:24.855138 sshd[5631]: Accepted publickey for core from 20.161.92.111 port 42286 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:37:24.857916 sshd[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:24.868428 systemd-logind[1481]: New session 20 of user core. Jan 20 01:37:24.877198 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 01:37:25.278016 kubelet[2683]: E0120 01:37:25.277552 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x6hg8" podUID="2652f767-bf33-49f7-b353-182252d33510" Jan 20 01:37:25.476614 sshd[5631]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:25.484109 systemd[1]: sshd@26-10.230.15.2:22-20.161.92.111:42286.service: Deactivated successfully. Jan 20 01:37:25.491419 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 01:37:25.498400 systemd-logind[1481]: Session 20 logged out. Waiting for processes to exit. Jan 20 01:37:25.523666 systemd[1]: Started sshd@27-10.230.15.2:22-134.209.94.87:47310.service - OpenSSH per-connection server daemon (134.209.94.87:47310). Jan 20 01:37:25.525983 systemd-logind[1481]: Removed session 20. 
Jan 20 01:37:25.659426 sshd[5644]: Connection closed by authenticating user root 134.209.94.87 port 47310 [preauth] Jan 20 01:37:25.661690 systemd[1]: sshd@27-10.230.15.2:22-134.209.94.87:47310.service: Deactivated successfully. Jan 20 01:37:26.279398 kubelet[2683]: E0120 01:37:26.278649 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" podUID="9e8ed20d-7ae4-416a-a5ca-28bbd455038b" Jan 20 01:37:26.279398 kubelet[2683]: E0120 01:37:26.278816 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:37:30.585362 systemd[1]: Started sshd@28-10.230.15.2:22-20.161.92.111:42296.service - OpenSSH per-connection server daemon (20.161.92.111:42296). Jan 20 01:37:31.161981 sshd[5654]: Accepted publickey for core from 20.161.92.111 port 42296 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:37:31.165392 sshd[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:31.176072 systemd-logind[1481]: New session 21 of user core. Jan 20 01:37:31.182247 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 01:37:31.672502 sshd[5654]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:31.678369 systemd[1]: sshd@28-10.230.15.2:22-20.161.92.111:42296.service: Deactivated successfully. Jan 20 01:37:31.683094 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 01:37:31.685794 systemd-logind[1481]: Session 21 logged out. Waiting for processes to exit. Jan 20 01:37:31.687751 systemd-logind[1481]: Removed session 21. Jan 20 01:37:31.778370 systemd[1]: Started sshd@29-10.230.15.2:22-20.161.92.111:42304.service - OpenSSH per-connection server daemon (20.161.92.111:42304). 
Jan 20 01:37:32.281289 kubelet[2683]: E0120 01:37:32.281011 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" podUID="708249e2-7049-4ff6-8bf2-b94a10ee1bca" Jan 20 01:37:32.355086 sshd[5667]: Accepted publickey for core from 20.161.92.111 port 42304 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:37:32.358032 sshd[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:32.367739 systemd-logind[1481]: New session 22 of user core. Jan 20 01:37:32.377296 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 01:37:33.306329 sshd[5667]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:33.314251 systemd[1]: sshd@29-10.230.15.2:22-20.161.92.111:42304.service: Deactivated successfully. Jan 20 01:37:33.318221 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 01:37:33.319482 systemd-logind[1481]: Session 22 logged out. Waiting for processes to exit. Jan 20 01:37:33.322161 systemd-logind[1481]: Removed session 22. Jan 20 01:37:33.410447 systemd[1]: Started sshd@30-10.230.15.2:22-20.161.92.111:47286.service - OpenSSH per-connection server daemon (20.161.92.111:47286). Jan 20 01:37:34.034608 sshd[5678]: Accepted publickey for core from 20.161.92.111 port 47286 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:37:34.037278 sshd[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:34.044989 systemd-logind[1481]: New session 23 of user core. Jan 20 01:37:34.049280 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 20 01:37:34.279371 kubelet[2683]: E0120 01:37:34.278666 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" podUID="72e0069f-0dfe-458b-8762-abad903cdba3" Jan 20 01:37:35.288115 kubelet[2683]: E0120 01:37:35.287887 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b6cd5cfd4-psk5g" podUID="ea506b49-1ce0-4278-a723-d51ad8fec903" Jan 20 01:37:35.408515 sshd[5678]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:35.415808 systemd[1]: sshd@30-10.230.15.2:22-20.161.92.111:47286.service: Deactivated successfully. Jan 20 01:37:35.421198 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 01:37:35.423913 systemd-logind[1481]: Session 23 logged out. Waiting for processes to exit. Jan 20 01:37:35.425776 systemd-logind[1481]: Removed session 23. Jan 20 01:37:35.520372 systemd[1]: Started sshd@31-10.230.15.2:22-20.161.92.111:47296.service - OpenSSH per-connection server daemon (20.161.92.111:47296). Jan 20 01:37:36.117801 sshd[5694]: Accepted publickey for core from 20.161.92.111 port 47296 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:37:36.120425 sshd[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:36.128531 systemd-logind[1481]: New session 24 of user core. Jan 20 01:37:36.134159 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 01:37:36.989255 sshd[5694]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:36.996273 systemd-logind[1481]: Session 24 logged out. Waiting for processes to exit. Jan 20 01:37:36.997636 systemd[1]: sshd@31-10.230.15.2:22-20.161.92.111:47296.service: Deactivated successfully. Jan 20 01:37:37.001197 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 01:37:37.004099 systemd-logind[1481]: Removed session 24. Jan 20 01:37:37.102545 systemd[1]: Started sshd@32-10.230.15.2:22-20.161.92.111:47312.service - OpenSSH per-connection server daemon (20.161.92.111:47312). 
Jan 20 01:37:37.703080 sshd[5707]: Accepted publickey for core from 20.161.92.111 port 47312 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:37:37.706123 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:37.714865 systemd-logind[1481]: New session 25 of user core. Jan 20 01:37:37.720447 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 01:37:38.211152 sshd[5707]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:38.217327 systemd[1]: sshd@32-10.230.15.2:22-20.161.92.111:47312.service: Deactivated successfully. Jan 20 01:37:38.220243 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 01:37:38.221402 systemd-logind[1481]: Session 25 logged out. Waiting for processes to exit. Jan 20 01:37:38.224381 systemd-logind[1481]: Removed session 25. Jan 20 01:37:38.279983 kubelet[2683]: E0120 01:37:38.279869 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" podUID="9e8ed20d-7ae4-416a-a5ca-28bbd455038b" Jan 20 01:37:38.281893 kubelet[2683]: E0120 01:37:38.280998 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x6hg8" podUID="2652f767-bf33-49f7-b353-182252d33510" Jan 20 01:37:40.294409 kubelet[2683]: E0120 01:37:40.294266 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wdqf6" podUID="fbc3977f-2a7c-42f2-a24b-94a3c5a0bac9" Jan 20 01:37:43.325103 systemd[1]: Started sshd@33-10.230.15.2:22-20.161.92.111:33226.service - OpenSSH per-connection server daemon (20.161.92.111:33226). 
Jan 20 01:37:43.905657 sshd[5725]: Accepted publickey for core from 20.161.92.111 port 33226 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:37:43.905402 sshd[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:43.918124 systemd-logind[1481]: New session 26 of user core. Jan 20 01:37:43.923591 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 01:37:44.283917 kubelet[2683]: E0120 01:37:44.283571 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d65cdbcf4-xqqft" podUID="708249e2-7049-4ff6-8bf2-b94a10ee1bca" Jan 20 01:37:44.500896 sshd[5725]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:44.520624 systemd-logind[1481]: Session 26 logged out. Waiting for processes to exit. Jan 20 01:37:44.522354 systemd[1]: sshd@33-10.230.15.2:22-20.161.92.111:33226.service: Deactivated successfully. Jan 20 01:37:44.531880 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 01:37:44.535412 systemd-logind[1481]: Removed session 26. Jan 20 01:37:47.277995 kubelet[2683]: E0120 01:37:47.277898 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-qrwh4" podUID="72e0069f-0dfe-458b-8762-abad903cdba3" Jan 20 01:37:48.302746 containerd[1504]: time="2026-01-20T01:37:48.302023595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:37:48.665760 containerd[1504]: time="2026-01-20T01:37:48.665232056Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:37:48.668117 containerd[1504]: time="2026-01-20T01:37:48.667977566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:37:48.668672 containerd[1504]: time="2026-01-20T01:37:48.668351508Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:37:48.669054 kubelet[2683]: E0120 01:37:48.668903 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:37:48.670408 kubelet[2683]: E0120 01:37:48.669073 2683 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:37:48.689558 kubelet[2683]: E0120 01:37:48.689459 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6b6cd5cfd4-psk5g_calico-system(ea506b49-1ce0-4278-a723-d51ad8fec903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:48.694237 containerd[1504]: time="2026-01-20T01:37:48.693806949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:37:49.030313 containerd[1504]: time="2026-01-20T01:37:49.029863273Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:37:49.031297 containerd[1504]: time="2026-01-20T01:37:49.031203044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:37:49.031640 containerd[1504]: time="2026-01-20T01:37:49.031261086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:37:49.032445 kubelet[2683]: E0120 01:37:49.032386 2683 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:37:49.032633 kubelet[2683]: E0120 01:37:49.032462 2683 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:37:49.032633 kubelet[2683]: E0120 01:37:49.032604 2683 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6b6cd5cfd4-psk5g_calico-system(ea506b49-1ce0-4278-a723-d51ad8fec903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:37:49.034748 kubelet[2683]: E0120 01:37:49.032687 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b6cd5cfd4-psk5g" podUID="ea506b49-1ce0-4278-a723-d51ad8fec903" Jan 20 01:37:49.611634 systemd[1]: Started sshd@34-10.230.15.2:22-20.161.92.111:33242.service - OpenSSH per-connection server daemon (20.161.92.111:33242). Jan 20 01:37:50.303077 sshd[5764]: Accepted publickey for core from 20.161.92.111 port 33242 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:37:50.317670 sshd[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:37:50.338685 systemd-logind[1481]: New session 27 of user core. Jan 20 01:37:50.348825 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 01:37:51.162598 sshd[5764]: pam_unix(sshd:session): session closed for user core Jan 20 01:37:51.173765 systemd[1]: sshd@34-10.230.15.2:22-20.161.92.111:33242.service: Deactivated successfully. Jan 20 01:37:51.180869 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 01:37:51.186203 systemd-logind[1481]: Session 27 logged out. Waiting for processes to exit. Jan 20 01:37:51.189460 systemd-logind[1481]: Removed session 27. Jan 20 01:37:51.314130 kubelet[2683]: E0120 01:37:51.313982 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-x6hg8" podUID="2652f767-bf33-49f7-b353-182252d33510" Jan 20 01:37:52.279137 kubelet[2683]: E0120 01:37:52.279071 2683 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c6469cbc-m6w49" podUID="9e8ed20d-7ae4-416a-a5ca-28bbd455038b" Jan 20 01:37:54.589644 systemd[1]: Started sshd@35-10.230.15.2:22-134.209.94.87:40248.service - OpenSSH per-connection server daemon (134.209.94.87:40248). Jan 20 01:37:54.761908 sshd[5784]: Connection closed by authenticating user root 134.209.94.87 port 40248 [preauth] Jan 20 01:37:54.764830 systemd[1]: sshd@35-10.230.15.2:22-134.209.94.87:40248.service: Deactivated successfully.