Jan 28 02:29:08.023879 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 23:02:38 -00 2026
Jan 28 02:29:08.023926 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 02:29:08.023941 kernel: BIOS-provided physical RAM map:
Jan 28 02:29:08.023957 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 28 02:29:08.023967 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 28 02:29:08.023978 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 28 02:29:08.023989 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 28 02:29:08.024000 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 28 02:29:08.024010 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 28 02:29:08.024020 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 28 02:29:08.024030 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 02:29:08.024040 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 28 02:29:08.024056 kernel: NX (Execute Disable) protection: active
Jan 28 02:29:08.024067 kernel: APIC: Static calls initialized
Jan 28 02:29:08.024079 kernel: SMBIOS 2.8 present.
Jan 28 02:29:08.024091 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 28 02:29:08.024102 kernel: Hypervisor detected: KVM
Jan 28 02:29:08.024118 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 28 02:29:08.024130 kernel: kvm-clock: using sched offset of 4486972444 cycles
Jan 28 02:29:08.024142 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 02:29:08.024154 kernel: tsc: Detected 2499.998 MHz processor
Jan 28 02:29:08.024166 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 28 02:29:08.024177 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 28 02:29:08.024189 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 28 02:29:08.024200 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 28 02:29:08.024212 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 28 02:29:08.024228 kernel: Using GB pages for direct mapping
Jan 28 02:29:08.024239 kernel: ACPI: Early table checksum verification disabled
Jan 28 02:29:08.024251 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 28 02:29:08.024262 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:29:08.024274 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:29:08.024285 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:29:08.024297 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 28 02:29:08.024308 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:29:08.024319 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:29:08.024336 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:29:08.024347 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:29:08.024359 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 28 02:29:08.024370 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 28 02:29:08.024382 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 28 02:29:08.024400 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 28 02:29:08.024412 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 28 02:29:08.024429 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 28 02:29:08.024441 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 28 02:29:08.024453 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 28 02:29:08.024465 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 28 02:29:08.024477 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 28 02:29:08.024489 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 28 02:29:08.024501 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 28 02:29:08.024518 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 28 02:29:08.024530 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 28 02:29:08.024542 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 28 02:29:08.024554 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 28 02:29:08.024565 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 28 02:29:08.025643 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 28 02:29:08.025661 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 28 02:29:08.025674 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 28 02:29:08.025686 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 28 02:29:08.025698 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 28 02:29:08.025717 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 28 02:29:08.025730 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 28 02:29:08.025742 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 28 02:29:08.025754 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 28 02:29:08.025766 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 28 02:29:08.025779 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 28 02:29:08.025791 kernel: Zone ranges:
Jan 28 02:29:08.025803 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 28 02:29:08.025815 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 28 02:29:08.025833 kernel: Normal empty
Jan 28 02:29:08.025845 kernel: Movable zone start for each node
Jan 28 02:29:08.025857 kernel: Early memory node ranges
Jan 28 02:29:08.025869 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 28 02:29:08.025881 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 28 02:29:08.025893 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 28 02:29:08.025905 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 02:29:08.025928 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 28 02:29:08.025941 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 28 02:29:08.025953 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 28 02:29:08.025971 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 28 02:29:08.025984 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 28 02:29:08.025996 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 28 02:29:08.026008 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 28 02:29:08.026020 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 28 02:29:08.026032 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 28 02:29:08.026044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 28 02:29:08.026056 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 28 02:29:08.026068 kernel: TSC deadline timer available
Jan 28 02:29:08.026086 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 28 02:29:08.026098 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 28 02:29:08.026110 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 28 02:29:08.026122 kernel: Booting paravirtualized kernel on KVM
Jan 28 02:29:08.026134 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 28 02:29:08.026147 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 28 02:29:08.026159 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Jan 28 02:29:08.026171 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Jan 28 02:29:08.026183 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 28 02:29:08.026200 kernel: kvm-guest: PV spinlocks enabled
Jan 28 02:29:08.026213 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 28 02:29:08.026226 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 02:29:08.026239 kernel: random: crng init done
Jan 28 02:29:08.026251 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 02:29:08.026263 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 28 02:29:08.026275 kernel: Fallback order for Node 0: 0
Jan 28 02:29:08.026287 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 28 02:29:08.026304 kernel: Policy zone: DMA32
Jan 28 02:29:08.026316 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 02:29:08.026328 kernel: software IO TLB: area num 16.
Jan 28 02:29:08.026341 kernel: Memory: 1901588K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 194768K reserved, 0K cma-reserved)
Jan 28 02:29:08.026353 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 28 02:29:08.026365 kernel: Kernel/User page tables isolation: enabled
Jan 28 02:29:08.026377 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 28 02:29:08.026389 kernel: ftrace: allocated 149 pages with 4 groups
Jan 28 02:29:08.026401 kernel: Dynamic Preempt: voluntary
Jan 28 02:29:08.026424 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 02:29:08.026437 kernel: rcu: RCU event tracing is enabled.
Jan 28 02:29:08.026450 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 28 02:29:08.026462 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 02:29:08.026474 kernel: Rude variant of Tasks RCU enabled.
Jan 28 02:29:08.026499 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 02:29:08.026516 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 02:29:08.026529 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 28 02:29:08.026542 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 28 02:29:08.026554 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 02:29:08.026567 kernel: Console: colour VGA+ 80x25
Jan 28 02:29:08.026625 kernel: printk: console [tty0] enabled
Jan 28 02:29:08.026647 kernel: printk: console [ttyS0] enabled
Jan 28 02:29:08.026660 kernel: ACPI: Core revision 20230628
Jan 28 02:29:08.026672 kernel: APIC: Switch to symmetric I/O mode setup
Jan 28 02:29:08.026685 kernel: x2apic enabled
Jan 28 02:29:08.026698 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 28 02:29:08.026716 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 28 02:29:08.026729 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 28 02:29:08.026741 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 28 02:29:08.026754 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 28 02:29:08.026766 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 28 02:29:08.026779 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 28 02:29:08.026791 kernel: Spectre V2 : Mitigation: Retpolines
Jan 28 02:29:08.026804 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 28 02:29:08.026817 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 28 02:29:08.026829 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 28 02:29:08.026847 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 28 02:29:08.026860 kernel: MDS: Mitigation: Clear CPU buffers
Jan 28 02:29:08.026872 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 28 02:29:08.026884 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 28 02:29:08.026897 kernel: active return thunk: its_return_thunk
Jan 28 02:29:08.026909 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 28 02:29:08.026933 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 28 02:29:08.026946 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 28 02:29:08.026959 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 28 02:29:08.026971 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 28 02:29:08.026984 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 28 02:29:08.027003 kernel: Freeing SMP alternatives memory: 32K
Jan 28 02:29:08.027015 kernel: pid_max: default: 32768 minimum: 301
Jan 28 02:29:08.027028 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 28 02:29:08.027040 kernel: landlock: Up and running.
Jan 28 02:29:08.027053 kernel: SELinux: Initializing.
Jan 28 02:29:08.027065 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 28 02:29:08.027077 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 28 02:29:08.027090 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 28 02:29:08.027103 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 28 02:29:08.027115 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 28 02:29:08.027134 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 28 02:29:08.027147 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 28 02:29:08.027160 kernel: signal: max sigframe size: 1776
Jan 28 02:29:08.027172 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 02:29:08.027185 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 02:29:08.027198 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 28 02:29:08.027211 kernel: smp: Bringing up secondary CPUs ...
Jan 28 02:29:08.027223 kernel: smpboot: x86: Booting SMP configuration:
Jan 28 02:29:08.027236 kernel: .... node #0, CPUs: #1
Jan 28 02:29:08.027254 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 28 02:29:08.027267 kernel: smp: Brought up 1 node, 2 CPUs
Jan 28 02:29:08.027279 kernel: smpboot: Max logical packages: 16
Jan 28 02:29:08.027292 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 28 02:29:08.027305 kernel: devtmpfs: initialized
Jan 28 02:29:08.027317 kernel: x86/mm: Memory block size: 128MB
Jan 28 02:29:08.027330 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 02:29:08.027343 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 28 02:29:08.027356 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 02:29:08.027368 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 02:29:08.027386 kernel: audit: initializing netlink subsys (disabled)
Jan 28 02:29:08.027399 kernel: audit: type=2000 audit(1769567346.716:1): state=initialized audit_enabled=0 res=1
Jan 28 02:29:08.027412 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 02:29:08.027424 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 28 02:29:08.027437 kernel: cpuidle: using governor menu
Jan 28 02:29:08.027450 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 02:29:08.027462 kernel: dca service started, version 1.12.1
Jan 28 02:29:08.027475 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 28 02:29:08.027493 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 28 02:29:08.027506 kernel: PCI: Using configuration type 1 for base access
Jan 28 02:29:08.027519 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 28 02:29:08.027532 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 02:29:08.027545 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 02:29:08.027558 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 02:29:08.027570 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 02:29:08.028621 kernel: ACPI: Added _OSI(Module Device)
Jan 28 02:29:08.028637 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 02:29:08.028658 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 02:29:08.028671 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 02:29:08.028685 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 28 02:29:08.028698 kernel: ACPI: Interpreter enabled
Jan 28 02:29:08.028710 kernel: ACPI: PM: (supports S0 S5)
Jan 28 02:29:08.028723 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 28 02:29:08.028736 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 28 02:29:08.028749 kernel: PCI: Using E820 reservations for host bridge windows
Jan 28 02:29:08.028762 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 28 02:29:08.028775 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 28 02:29:08.029080 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 28 02:29:08.029270 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 28 02:29:08.029449 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 28 02:29:08.029477 kernel: PCI host bridge to bus 0000:00
Jan 28 02:29:08.029676 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 28 02:29:08.029841 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 28 02:29:08.030026 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 28 02:29:08.030185 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 28 02:29:08.030342 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 28 02:29:08.030499 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 28 02:29:08.030689 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 28 02:29:08.030904 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 28 02:29:08.031138 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 28 02:29:08.033754 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 28 02:29:08.033961 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 28 02:29:08.034141 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 28 02:29:08.034316 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 28 02:29:08.034534 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 28 02:29:08.036354 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 28 02:29:08.036614 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 28 02:29:08.036802 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 28 02:29:08.037013 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 28 02:29:08.037190 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 28 02:29:08.037382 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 28 02:29:08.037556 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 28 02:29:08.039781 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 28 02:29:08.039981 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 28 02:29:08.040170 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 28 02:29:08.040348 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 28 02:29:08.040531 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 28 02:29:08.040725 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 28 02:29:08.040934 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 28 02:29:08.041113 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 28 02:29:08.041300 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 28 02:29:08.041476 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 28 02:29:08.042611 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 28 02:29:08.042790 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 28 02:29:08.042977 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 28 02:29:08.043185 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 28 02:29:08.043358 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 28 02:29:08.043527 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 28 02:29:08.045793 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 28 02:29:08.046008 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 28 02:29:08.046216 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 28 02:29:08.048817 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 28 02:29:08.049039 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 28 02:29:08.049214 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 28 02:29:08.049399 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 28 02:29:08.049570 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 28 02:29:08.049815 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 28 02:29:08.050012 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 28 02:29:08.051647 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 28 02:29:08.051894 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 28 02:29:08.052084 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 28 02:29:08.052279 kernel: pci_bus 0000:02: extended config space not accessible
Jan 28 02:29:08.052490 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 28 02:29:08.053735 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 28 02:29:08.053941 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 28 02:29:08.054121 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 28 02:29:08.054310 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 28 02:29:08.054486 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 28 02:29:08.055738 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 28 02:29:08.055923 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 28 02:29:08.056098 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 28 02:29:08.056301 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 28 02:29:08.056495 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 28 02:29:08.057717 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 28 02:29:08.057890 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 28 02:29:08.058077 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 28 02:29:08.058254 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 28 02:29:08.058423 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 28 02:29:08.060629 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 28 02:29:08.060820 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 28 02:29:08.061008 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 28 02:29:08.061180 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 28 02:29:08.061353 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 28 02:29:08.061523 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 28 02:29:08.061742 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 28 02:29:08.061925 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 28 02:29:08.062099 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 28 02:29:08.062281 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 28 02:29:08.062454 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 28 02:29:08.064663 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 28 02:29:08.064841 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 28 02:29:08.064861 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 28 02:29:08.064875 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 28 02:29:08.064889 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 28 02:29:08.064902 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 28 02:29:08.064927 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 28 02:29:08.064950 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 28 02:29:08.064963 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 28 02:29:08.064976 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 28 02:29:08.064989 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 28 02:29:08.065002 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 28 02:29:08.065015 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 28 02:29:08.065028 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 28 02:29:08.065041 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 28 02:29:08.065054 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 28 02:29:08.065073 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 28 02:29:08.065085 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 28 02:29:08.065098 kernel: iommu: Default domain type: Translated
Jan 28 02:29:08.065111 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 28 02:29:08.065124 kernel: PCI: Using ACPI for IRQ routing
Jan 28 02:29:08.065137 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 28 02:29:08.065150 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 28 02:29:08.065163 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 28 02:29:08.065337 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 28 02:29:08.065520 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 28 02:29:08.065747 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 28 02:29:08.065768 kernel: vgaarb: loaded
Jan 28 02:29:08.065782 kernel: clocksource: Switched to clocksource kvm-clock
Jan 28 02:29:08.065795 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 02:29:08.065808 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 02:29:08.065821 kernel: pnp: PnP ACPI init
Jan 28 02:29:08.066013 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 28 02:29:08.066043 kernel: pnp: PnP ACPI: found 5 devices
Jan 28 02:29:08.066056 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 28 02:29:08.066070 kernel: NET: Registered PF_INET protocol family
Jan 28 02:29:08.066083 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 02:29:08.066096 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 28 02:29:08.066109 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 02:29:08.066122 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 28 02:29:08.066135 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 28 02:29:08.066153 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 28 02:29:08.066166 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 28 02:29:08.066179 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 28 02:29:08.066192 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 02:29:08.066205 kernel: NET: Registered PF_XDP protocol family
Jan 28 02:29:08.066373 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 28 02:29:08.066545 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 28 02:29:08.067764 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 28 02:29:08.067965 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 28 02:29:08.068139 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 28 02:29:08.068311 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 28 02:29:08.068481 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 28 02:29:08.068676 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 28 02:29:08.068847 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 28 02:29:08.069040 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 28 02:29:08.069210 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 28 02:29:08.069384 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 28 02:29:08.069557 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 28 02:29:08.071777 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 28 02:29:08.071962 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 28 02:29:08.072133 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 28 02:29:08.072312 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 28 02:29:08.072525 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 28 02:29:08.073735 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 28 02:29:08.073919 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 28 02:29:08.074095 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 28 02:29:08.074266 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 28 02:29:08.074437 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 28 02:29:08.076639 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 28 02:29:08.076817 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 28 02:29:08.077004 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 28 02:29:08.077188 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 28 02:29:08.077358 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 28 02:29:08.077529 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 28 02:29:08.077726 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 28 02:29:08.077899 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 28 02:29:08.078095 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 28 02:29:08.078270 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 28 02:29:08.078442 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 28 02:29:08.079668 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 28 02:29:08.079852 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 28 02:29:08.080044 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 28 02:29:08.080232 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 28 02:29:08.080419 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 28 02:29:08.081665 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 28 02:29:08.081982 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 28 02:29:08.082180 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 28 02:29:08.082459 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 28 02:29:08.083705 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 28 02:29:08.083880 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 28 02:29:08.084074 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 28 02:29:08.084247 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 28 02:29:08.084417 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 28 02:29:08.085632 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 28 02:29:08.085810 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 28 02:29:08.085987 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 28 02:29:08.086154 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 28 02:29:08.086308 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 28 02:29:08.086462 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 28 02:29:08.088662 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 28 02:29:08.088822 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 28 02:29:08.089014 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 28 02:29:08.089178 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 28 02:29:08.089339 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 28 02:29:08.089514 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 28 02:29:08.089721 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 28 02:29:08.089897 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 28 02:29:08.090075 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 28 02:29:08.090252 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 28 02:29:08.090416 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 28 02:29:08.090596 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 28 02:29:08.090775 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 28 02:29:08.090964 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 28 02:29:08.091129 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 28 02:29:08.091316 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 28 02:29:08.091481 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 28 02:29:08.091664 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 28 02:29:08.091851 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 28 02:29:08.092032 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 28 02:29:08.092206 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 28 02:29:08.092381 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 28 02:29:08.092551 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 28 02:29:08.092746 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 28 02:29:08.092936 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 28 02:29:08.093108 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 28 02:29:08.093274 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 28 02:29:08.093303 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 28 02:29:08.093318 kernel: PCI: CLS 0 bytes, default 64
Jan 28 02:29:08.093332 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 28 02:29:08.093345 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Jan 28 02:29:08.093359 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 28 02:29:08.093373 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 28 02:29:08.093387 kernel: Initialise system trusted keyrings
Jan 28 02:29:08.093401 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 28 02:29:08.093419 kernel: Key type asymmetric registered
Jan 28 02:29:08.093433 kernel: Asymmetric key parser 'x509' registered
Jan 28 02:29:08.093446 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 28 02:29:08.093460 kernel: io scheduler mq-deadline registered
Jan 28 02:29:08.093473 kernel: io scheduler kyber registered
Jan 28 02:29:08.093486 kernel: io scheduler bfq registered
Jan 28 02:29:08.093693 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 28 02:29:08.093868 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 28 02:29:08.094053 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:29:08.094238 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 28 02:29:08.094411 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 28 02:29:08.094605 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:29:08.094788 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 28 02:29:08.094978 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 28 02:29:08.095154 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:29:08.095340 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 28 02:29:08.095516 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 28 02:29:08.095758 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:29:08.095945 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 28 02:29:08.096116 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 28 02:29:08.096286 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:29:08.096467 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 28 02:29:08.096669 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 28 02:29:08.096843 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:29:08.097030 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 28 02:29:08.097203 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 28 02:29:08.097374 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:29:08.097555 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 28 02:29:08.097768 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 28 02:29:08.097963 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:29:08.097985 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 28 02:29:08.098000 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 28 02:29:08.098014 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 28 02:29:08.098028 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 02:29:08.098049 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 28 02:29:08.098063 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 28 02:29:08.098077 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 28 02:29:08.098090 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 28 02:29:08.098104 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 28 02:29:08.098277 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 28 02:29:08.098440 kernel: rtc_cmos 00:03: registered as rtc0
Jan 28 02:29:08.098618 kernel: rtc_cmos 00:03: setting system clock to 2026-01-28T02:29:07 UTC (1769567347)
Jan 28 02:29:08.098798 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 28 02:29:08.098818 kernel: intel_pstate: CPU model not supported
Jan 28 02:29:08.098832 kernel: NET: Registered PF_INET6 protocol family
Jan 28 02:29:08.098846 kernel: Segment Routing with IPv6
Jan 28 02:29:08.098860 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 02:29:08.098873 kernel: NET: Registered PF_PACKET protocol family
Jan 28 02:29:08.098886 kernel: Key type dns_resolver registered
Jan 28 02:29:08.098908 kernel: IPI shorthand broadcast: enabled
Jan 28 02:29:08.098934 kernel: sched_clock: Marking stable (1269004427, 227954192)->(1626518891, -129560272)
Jan 28 02:29:08.098955 kernel: registered taskstats version 1
Jan 28 02:29:08.098969 kernel: Loading compiled-in X.509 certificates
Jan 28 02:29:08.098983 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 828aa81885d7116cb1bcfd05d35b5b0a881d685d'
Jan 28 02:29:08.098996 kernel: Key type .fscrypt registered
Jan 28 02:29:08.099009 kernel: Key type fscrypt-provisioning registered
Jan 28 02:29:08.099022 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 28 02:29:08.099036 kernel: ima: Allocated hash algorithm: sha1
Jan 28 02:29:08.099049 kernel: ima: No architecture policies found
Jan 28 02:29:08.099063 kernel: clk: Disabling unused clocks
Jan 28 02:29:08.099082 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 28 02:29:08.099095 kernel: Write protecting the kernel read-only data: 36864k
Jan 28 02:29:08.099109 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 28 02:29:08.099122 kernel: Run /init as init process
Jan 28 02:29:08.099136 kernel: with arguments:
Jan 28 02:29:08.099149 kernel: /init
Jan 28 02:29:08.099162 kernel: with environment:
Jan 28 02:29:08.099176 kernel: HOME=/
Jan 28 02:29:08.099189 kernel: TERM=linux
Jan 28 02:29:08.099210 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 28 02:29:08.099228 systemd[1]: Detected virtualization kvm.
Jan 28 02:29:08.099242 systemd[1]: Detected architecture x86-64.
Jan 28 02:29:08.099256 systemd[1]: Running in initrd.
Jan 28 02:29:08.099270 systemd[1]: No hostname configured, using default hostname.
Jan 28 02:29:08.099284 systemd[1]: Hostname set to .
Jan 28 02:29:08.099299 systemd[1]: Initializing machine ID from VM UUID.
Jan 28 02:29:08.099318 systemd[1]: Queued start job for default target initrd.target.
Jan 28 02:29:08.099333 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 02:29:08.099347 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 02:29:08.099363 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 02:29:08.099377 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 02:29:08.099392 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 02:29:08.099407 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 02:29:08.099428 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 28 02:29:08.099443 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 28 02:29:08.099458 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 02:29:08.099473 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 02:29:08.099488 systemd[1]: Reached target paths.target - Path Units.
Jan 28 02:29:08.099507 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 02:29:08.099522 systemd[1]: Reached target swap.target - Swaps.
Jan 28 02:29:08.099536 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 02:29:08.099556 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 02:29:08.099571 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 02:29:08.099619 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 02:29:08.099635 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 28 02:29:08.099649 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 02:29:08.099664 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 02:29:08.099678 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 02:29:08.099693 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 02:29:08.099708 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 02:29:08.099730 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 02:29:08.099750 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 02:29:08.099764 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 02:29:08.099779 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 02:29:08.099793 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 02:29:08.099808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 02:29:08.099882 systemd-journald[202]: Collecting audit messages is disabled.
Jan 28 02:29:08.099942 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 02:29:08.099957 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 02:29:08.099972 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 02:29:08.099993 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 02:29:08.100008 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 28 02:29:08.100022 kernel: Bridge firewalling registered
Jan 28 02:29:08.100038 systemd-journald[202]: Journal started
Jan 28 02:29:08.100070 systemd-journald[202]: Runtime Journal (/run/log/journal/bae275cf4f894832a22708fea724b734) is 4.7M, max 38.0M, 33.2M free.
Jan 28 02:29:08.048743 systemd-modules-load[203]: Inserted module 'overlay'
Jan 28 02:29:08.141060 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 02:29:08.085003 systemd-modules-load[203]: Inserted module 'br_netfilter'
Jan 28 02:29:08.143271 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 02:29:08.144361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 02:29:08.155820 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 02:29:08.162788 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 02:29:08.175805 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 02:29:08.180087 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 02:29:08.188795 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 02:29:08.190032 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 02:29:08.199808 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 02:29:08.202776 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 02:29:08.204304 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 02:29:08.208813 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 02:29:08.231619 dracut-cmdline[232]: dracut-dracut-053
Jan 28 02:29:08.231639 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 02:29:08.238991 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 02:29:08.262817 systemd-resolved[235]: Positive Trust Anchors:
Jan 28 02:29:08.262850 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 02:29:08.262896 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 02:29:08.267751 systemd-resolved[235]: Defaulting to hostname 'linux'.
Jan 28 02:29:08.269536 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 02:29:08.271131 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 02:29:08.345643 kernel: SCSI subsystem initialized
Jan 28 02:29:08.357623 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 02:29:08.371641 kernel: iscsi: registered transport (tcp)
Jan 28 02:29:08.397683 kernel: iscsi: registered transport (qla4xxx)
Jan 28 02:29:08.397775 kernel: QLogic iSCSI HBA Driver
Jan 28 02:29:08.452133 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 02:29:08.459823 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 02:29:08.493178 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 02:29:08.493261 kernel: device-mapper: uevent: version 1.0.3
Jan 28 02:29:08.493300 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 28 02:29:08.543717 kernel: raid6: sse2x4 gen() 13842 MB/s
Jan 28 02:29:08.561660 kernel: raid6: sse2x2 gen() 9458 MB/s
Jan 28 02:29:08.580317 kernel: raid6: sse2x1 gen() 9996 MB/s
Jan 28 02:29:08.580394 kernel: raid6: using algorithm sse2x4 gen() 13842 MB/s
Jan 28 02:29:08.599362 kernel: raid6: .... xor() 7694 MB/s, rmw enabled
Jan 28 02:29:08.599452 kernel: raid6: using ssse3x2 recovery algorithm
Jan 28 02:29:08.625624 kernel: xor: automatically using best checksumming function avx
Jan 28 02:29:08.820615 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 02:29:08.835450 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 02:29:08.842793 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 02:29:08.869499 systemd-udevd[420]: Using default interface naming scheme 'v255'.
Jan 28 02:29:08.876975 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 02:29:08.884789 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 28 02:29:08.907337 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Jan 28 02:29:08.948283 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 02:29:08.954826 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 02:29:09.065898 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 02:29:09.072436 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 02:29:09.104967 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 02:29:09.107054 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 02:29:09.109417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 02:29:09.110195 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 02:29:09.119770 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 02:29:09.147644 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 02:29:09.210493 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 28 02:29:09.222898 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 28 02:29:09.236311 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 28 02:29:09.236387 kernel: GPT:17805311 != 125829119
Jan 28 02:29:09.236406 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 28 02:29:09.236423 kernel: GPT:17805311 != 125829119
Jan 28 02:29:09.236440 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 28 02:29:09.236473 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 02:29:09.243427 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 02:29:09.245085 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 02:29:09.247228 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 02:29:09.249936 kernel: cryptd: max_cpu_qlen set to 1000
Jan 28 02:29:09.248932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 02:29:09.253819 kernel: ACPI: bus type USB registered
Jan 28 02:29:09.249121 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 02:29:09.254726 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 02:29:09.266648 kernel: usbcore: registered new interface driver usbfs
Jan 28 02:29:09.267648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 02:29:09.281725 kernel: usbcore: registered new interface driver hub
Jan 28 02:29:09.287663 kernel: usbcore: registered new device driver usb
Jan 28 02:29:09.295603 kernel: AVX version of gcm_enc/dec engaged.
Jan 28 02:29:09.295661 kernel: libata version 3.00 loaded.
Jan 28 02:29:09.301600 kernel: AES CTR mode by8 optimization enabled
Jan 28 02:29:09.350606 kernel: BTRFS: device fsid 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (467)
Jan 28 02:29:09.356655 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 28 02:29:09.356979 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Jan 28 02:29:09.357600 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 28 02:29:09.359630 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 28 02:29:09.359867 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Jan 28 02:29:09.360094 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Jan 28 02:29:09.360314 kernel: hub 1-0:1.0: USB hub found
Jan 28 02:29:09.360540 kernel: hub 1-0:1.0: 4 ports detected
Jan 28 02:29:09.361863 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 28 02:29:09.362126 kernel: hub 2-0:1.0: USB hub found
Jan 28 02:29:09.362371 kernel: hub 2-0:1.0: 4 ports detected
Jan 28 02:29:09.362615 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (479)
Jan 28 02:29:09.401173 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 28 02:29:09.479075 kernel: ahci 0000:00:1f.2: version 3.0
Jan 28 02:29:09.479394 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 28 02:29:09.479450 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 28 02:29:09.479675 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 28 02:29:09.479905 kernel: scsi host0: ahci
Jan 28 02:29:09.480168 kernel: scsi host1: ahci
Jan 28 02:29:09.480392 kernel: scsi host2: ahci
Jan 28 02:29:09.480594 kernel: scsi host3: ahci
Jan 28 02:29:09.480824 kernel: scsi host4: ahci
Jan 28 02:29:09.481052 kernel: scsi host5: ahci
Jan 28 02:29:09.481250 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Jan 28 02:29:09.481280 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Jan 28 02:29:09.481300 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Jan 28 02:29:09.481318 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Jan 28 02:29:09.481336 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Jan 28 02:29:09.481360 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Jan 28 02:29:09.480239 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 02:29:09.488426 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 28 02:29:09.495791 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 02:29:09.501728 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 28 02:29:09.502633 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 28 02:29:09.519975 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 02:29:09.523522 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 02:29:09.529132 disk-uuid[564]: Primary Header is updated.
Jan 28 02:29:09.529132 disk-uuid[564]: Secondary Entries is updated.
Jan 28 02:29:09.529132 disk-uuid[564]: Secondary Header is updated.
Jan 28 02:29:09.537605 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 02:29:09.547034 kernel: GPT:disk_guids don't match.
Jan 28 02:29:09.547093 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 28 02:29:09.547123 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 02:29:09.553465 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 02:29:09.556748 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 02:29:09.607599 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 28 02:29:09.737135 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 28 02:29:09.737202 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 28 02:29:09.737594 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 28 02:29:09.743838 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 28 02:29:09.743900 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 28 02:29:09.744599 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 28 02:29:09.770626 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 28 02:29:09.777971 kernel: usbcore: registered new interface driver usbhid
Jan 28 02:29:09.778018 kernel: usbhid: USB HID core driver
Jan 28 02:29:09.785613 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Jan 28 02:29:09.785669 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Jan 28 02:29:10.554548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 02:29:10.555380 disk-uuid[565]: The operation has completed successfully.
Jan 28 02:29:10.614869 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 28 02:29:10.615062 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 28 02:29:10.636831 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 28 02:29:10.641471 sh[587]: Success
Jan 28 02:29:10.661633 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Jan 28 02:29:10.730047 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 02:29:10.732702 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 28 02:29:10.736891 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 28 02:29:10.760128 kernel: BTRFS info (device dm-0): first mount of filesystem 2a6822f0-63ba-4278-91a8-3fe9ed12ab22
Jan 28 02:29:10.760208 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 28 02:29:10.762223 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 28 02:29:10.765559 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 28 02:29:10.765612 kernel: BTRFS info (device dm-0): using free space tree
Jan 28 02:29:10.777171 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 28 02:29:10.778722 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 28 02:29:10.784793 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 28 02:29:10.787758 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 28 02:29:10.804631 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 02:29:10.808620 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 02:29:10.808664 kernel: BTRFS info (device vda6): using free space tree
Jan 28 02:29:10.814815 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 28 02:29:10.827342 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 28 02:29:10.829782 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 02:29:10.836798 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 28 02:29:10.844823 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 28 02:29:10.951117 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 02:29:10.963920 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 02:29:10.995474 ignition[681]: Ignition 2.19.0
Jan 28 02:29:10.995507 ignition[681]: Stage: fetch-offline
Jan 28 02:29:10.997555 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Jan 28 02:29:10.997609 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 28 02:29:10.997821 ignition[681]: parsed url from cmdline: ""
Jan 28 02:29:10.997829 ignition[681]: no config URL provided
Jan 28 02:29:10.997854 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 02:29:11.002097 systemd-networkd[773]: lo: Link UP
Jan 28 02:29:10.997871 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Jan 28 02:29:11.002104 systemd-networkd[773]: lo: Gained carrier
Jan 28 02:29:10.997881 ignition[681]: failed to fetch config: resource requires networking
Jan 28 02:29:11.002219 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 02:29:10.998176 ignition[681]: Ignition finished successfully
Jan 28 02:29:11.004711 systemd-networkd[773]: Enumeration completed
Jan 28 02:29:11.004912 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 02:29:11.005309 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 02:29:11.005315 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 02:29:11.007899 systemd-networkd[773]: eth0: Link UP Jan 28 02:29:11.007905 systemd-networkd[773]: eth0: Gained carrier Jan 28 02:29:11.007918 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 02:29:11.008088 systemd[1]: Reached target network.target - Network. Jan 28 02:29:11.016823 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 28 02:29:11.029735 systemd-networkd[773]: eth0: DHCPv4 address 10.230.34.254/30, gateway 10.230.34.253 acquired from 10.230.34.253 Jan 28 02:29:11.048536 ignition[779]: Ignition 2.19.0 Jan 28 02:29:11.048555 ignition[779]: Stage: fetch Jan 28 02:29:11.048809 ignition[779]: no configs at "/usr/lib/ignition/base.d" Jan 28 02:29:11.048829 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 02:29:11.048965 ignition[779]: parsed url from cmdline: "" Jan 28 02:29:11.048972 ignition[779]: no config URL provided Jan 28 02:29:11.048982 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 02:29:11.048998 ignition[779]: no config at "/usr/lib/ignition/user.ign" Jan 28 02:29:11.049210 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 28 02:29:11.049371 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 28 02:29:11.049418 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 28 02:29:11.067192 ignition[779]: GET result: OK Jan 28 02:29:11.067709 ignition[779]: parsing config with SHA512: 4fdfbece324d167eaf2076dc62fcdaee0710b7b065cf4c81cf431c7d47a67656cde7a515b0dade5d8fa79d48ad485929fe70a5233e05c7cf7608305e51419e73 Jan 28 02:29:11.073317 unknown[779]: fetched base config from "system" Jan 28 02:29:11.073975 ignition[779]: fetch: fetch complete Jan 28 02:29:11.073335 unknown[779]: fetched base config from "system" Jan 28 02:29:11.073984 ignition[779]: fetch: fetch passed Jan 28 02:29:11.073345 unknown[779]: fetched user config from "openstack" Jan 28 02:29:11.074070 ignition[779]: Ignition finished successfully Jan 28 02:29:11.076016 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 28 02:29:11.091779 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 02:29:11.109532 ignition[787]: Ignition 2.19.0 Jan 28 02:29:11.109555 ignition[787]: Stage: kargs Jan 28 02:29:11.109818 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 28 02:29:11.109852 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 02:29:11.112633 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 02:29:11.110960 ignition[787]: kargs: kargs passed Jan 28 02:29:11.111040 ignition[787]: Ignition finished successfully Jan 28 02:29:11.127908 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 02:29:11.148998 ignition[794]: Ignition 2.19.0 Jan 28 02:29:11.149016 ignition[794]: Stage: disks Jan 28 02:29:11.149258 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jan 28 02:29:11.149278 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 02:29:11.150673 ignition[794]: disks: disks passed Jan 28 02:29:11.152536 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 28 02:29:11.150756 ignition[794]: Ignition finished successfully Jan 28 02:29:11.154227 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 28 02:29:11.155289 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 02:29:11.156805 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 02:29:11.158132 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 02:29:11.159633 systemd[1]: Reached target basic.target - Basic System. Jan 28 02:29:11.167854 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 02:29:11.189185 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 28 02:29:11.194205 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 02:29:11.199814 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 02:29:11.322602 kernel: EXT4-fs (vda9): mounted filesystem 9c67117c-3c4f-4d47-a63c-8955eb7dbc8a r/w with ordered data mode. Quota mode: none. Jan 28 02:29:11.323953 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 02:29:11.325423 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 02:29:11.331739 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 02:29:11.335702 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 02:29:11.336813 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 28 02:29:11.342894 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 28 02:29:11.344767 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 02:29:11.344818 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 02:29:11.349619 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 02:29:11.356328 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 02:29:11.364042 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810) Jan 28 02:29:11.364075 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 02:29:11.364094 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 02:29:11.364111 kernel: BTRFS info (device vda6): using free space tree Jan 28 02:29:11.370636 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 02:29:11.387234 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 02:29:11.449236 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 28 02:29:11.458197 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 28 02:29:11.466616 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 28 02:29:11.475569 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 28 02:29:11.584392 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 02:29:11.590730 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 02:29:11.593668 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 02:29:11.607606 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 02:29:11.634467 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 28 02:29:11.639615 ignition[928]: INFO : Ignition 2.19.0 Jan 28 02:29:11.639615 ignition[928]: INFO : Stage: mount Jan 28 02:29:11.639615 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 02:29:11.639615 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 02:29:11.643774 ignition[928]: INFO : mount: mount passed Jan 28 02:29:11.643774 ignition[928]: INFO : Ignition finished successfully Jan 28 02:29:11.644546 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 02:29:11.758319 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 02:29:12.880013 systemd-networkd[773]: eth0: Gained IPv6LL Jan 28 02:29:14.388126 systemd-networkd[773]: eth0: Ignoring DHCPv6 address 2a02:1348:179:88bf:24:19ff:fee6:22fe/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:88bf:24:19ff:fee6:22fe/64 assigned by NDisc. Jan 28 02:29:14.388142 systemd-networkd[773]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 28 02:29:18.524865 coreos-metadata[812]: Jan 28 02:29:18.524 WARN failed to locate config-drive, using the metadata service API instead Jan 28 02:29:18.548126 coreos-metadata[812]: Jan 28 02:29:18.548 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 28 02:29:18.563153 coreos-metadata[812]: Jan 28 02:29:18.563 INFO Fetch successful Jan 28 02:29:18.564297 coreos-metadata[812]: Jan 28 02:29:18.563 INFO wrote hostname srv-hg60y.gb1.brightbox.com to /sysroot/etc/hostname Jan 28 02:29:18.567376 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 28 02:29:18.567538 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 28 02:29:18.573726 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 02:29:18.598893 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 02:29:18.611210 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Jan 28 02:29:18.611270 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 02:29:18.614979 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 02:29:18.615054 kernel: BTRFS info (device vda6): using free space tree Jan 28 02:29:18.622624 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 02:29:18.623930 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 28 02:29:18.656009 ignition[962]: INFO : Ignition 2.19.0 Jan 28 02:29:18.656009 ignition[962]: INFO : Stage: files Jan 28 02:29:18.658154 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 02:29:18.658154 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 02:29:18.658154 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Jan 28 02:29:18.661379 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 02:29:18.661379 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 02:29:18.663480 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 02:29:18.663480 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 02:29:18.665602 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 02:29:18.663879 unknown[962]: wrote ssh authorized keys file for user: core Jan 28 02:29:18.667657 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 02:29:18.667657 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 28 02:29:18.854156 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 02:29:19.108532 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 02:29:19.108532 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 02:29:19.111533 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 02:29:19.111533 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 02:29:19.111533 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 02:29:19.111533 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 02:29:19.111533 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 02:29:19.111533 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 02:29:19.111533 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 02:29:19.111533 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 02:29:19.111533 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 02:29:19.111533 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 02:29:19.111533 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 02:29:19.111533 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 02:29:19.111533 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 28 02:29:19.571315 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 02:29:21.520123 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 02:29:21.520123 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 02:29:21.523527 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 02:29:21.523527 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 02:29:21.523527 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 02:29:21.523527 ignition[962]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 28 02:29:21.523527 ignition[962]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 02:29:21.523527 ignition[962]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 02:29:21.523527 ignition[962]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 02:29:21.523527 ignition[962]: INFO : files: files passed Jan 28 02:29:21.523527 ignition[962]: INFO : Ignition finished successfully Jan 28 02:29:21.525404 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 02:29:21.535860 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 02:29:21.537696 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 02:29:21.556128 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 02:29:21.556301 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 02:29:21.569799 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 02:29:21.569799 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 02:29:21.572377 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 02:29:21.574286 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 02:29:21.575430 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 02:29:21.592306 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 02:29:21.628482 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 02:29:21.628724 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 02:29:21.630534 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jan 28 02:29:21.632004 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 02:29:21.633714 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 02:29:21.639769 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 02:29:21.659509 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 02:29:21.666808 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 02:29:21.681198 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 02:29:21.683093 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 02:29:21.684848 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 02:29:21.685704 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 02:29:21.685889 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 02:29:21.687775 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 02:29:21.688750 systemd[1]: Stopped target basic.target - Basic System. Jan 28 02:29:21.690285 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 02:29:21.691841 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 02:29:21.693263 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 02:29:21.694890 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 02:29:21.696462 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 02:29:21.698136 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 02:29:21.699675 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 02:29:21.701280 systemd[1]: Stopped target swap.target - Swaps. Jan 28 02:29:21.703462 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 02:29:21.703699 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 02:29:21.705553 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 02:29:21.706541 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 02:29:21.708097 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 02:29:21.708272 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 02:29:21.709775 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 02:29:21.709950 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 02:29:21.711874 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 02:29:21.712044 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 02:29:21.714013 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 02:29:21.714173 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 02:29:21.725325 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 02:29:21.728871 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 02:29:21.730042 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 02:29:21.730290 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 02:29:21.732816 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 28 02:29:21.733040 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 02:29:21.740572 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 02:29:21.747005 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 02:29:21.757853 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 02:29:21.761688 ignition[1014]: INFO : Ignition 2.19.0 Jan 28 02:29:21.761688 ignition[1014]: INFO : Stage: umount Jan 28 02:29:21.761688 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 02:29:21.761688 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 02:29:21.761688 ignition[1014]: INFO : umount: umount passed Jan 28 02:29:21.761688 ignition[1014]: INFO : Ignition finished successfully Jan 28 02:29:21.757996 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 02:29:21.763005 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 02:29:21.763099 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 02:29:21.763905 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 02:29:21.763970 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 02:29:21.765747 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 02:29:21.765815 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 02:29:21.767743 systemd[1]: Stopped target network.target - Network. Jan 28 02:29:21.770569 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 02:29:21.770688 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 02:29:21.772060 systemd[1]: Stopped target paths.target - Path Units. Jan 28 02:29:21.774966 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 02:29:21.780932 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 02:29:21.783352 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 02:29:21.784880 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 02:29:21.787076 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 02:29:21.787152 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 02:29:21.787875 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 02:29:21.787939 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 02:29:21.789712 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 02:29:21.789810 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 02:29:21.791021 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 02:29:21.791099 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 02:29:21.793020 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 02:29:21.795078 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 02:29:21.797806 systemd-networkd[773]: eth0: DHCPv6 lease lost Jan 28 02:29:21.800269 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 02:29:21.801523 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 02:29:21.801745 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 02:29:21.805320 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 28 02:29:21.805720 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 02:29:21.808874 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 02:29:21.809035 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 02:29:21.812137 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 02:29:21.812219 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 02:29:21.814003 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 02:29:21.814080 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 02:29:21.822776 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 02:29:21.823545 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 02:29:21.823672 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 02:29:21.826405 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 02:29:21.826475 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 02:29:21.829898 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 02:29:21.829971 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 02:29:21.831400 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 02:29:21.831475 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 02:29:21.833191 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 02:29:21.845628 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 02:29:21.846794 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 02:29:21.847946 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 02:29:21.848186 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 02:29:21.850336 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 02:29:21.850482 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 02:29:21.852075 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 02:29:21.852139 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 02:29:21.853839 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 02:29:21.853914 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 02:29:21.856149 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 02:29:21.856222 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 02:29:21.857716 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 02:29:21.857792 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 02:29:21.865807 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 02:29:21.867696 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 02:29:21.868678 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 02:29:21.870630 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 28 02:29:21.870704 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 02:29:21.871530 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jan 28 02:29:21.871644 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 02:29:21.872466 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 02:29:21.872534 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 02:29:21.876453 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 02:29:21.876639 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 02:29:21.878502 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 02:29:21.889245 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 02:29:21.899095 systemd[1]: Switching root. Jan 28 02:29:21.938908 systemd-journald[202]: Journal stopped Jan 28 02:29:23.574037 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Jan 28 02:29:23.574166 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 02:29:23.574206 kernel: SELinux: policy capability open_perms=1 Jan 28 02:29:23.574226 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 02:29:23.574257 kernel: SELinux: policy capability always_check_network=0 Jan 28 02:29:23.574281 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 02:29:23.574324 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 02:29:23.574344 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 02:29:23.574362 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 02:29:23.574407 kernel: audit: type=1403 audit(1769567362.319:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 02:29:23.574435 systemd[1]: Successfully loaded SELinux policy in 48.085ms. Jan 28 02:29:23.574459 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.415ms. Jan 28 02:29:23.574480 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 02:29:23.574507 systemd[1]: Detected virtualization kvm. Jan 28 02:29:23.577140 systemd[1]: Detected architecture x86-64. Jan 28 02:29:23.577178 systemd[1]: Detected first boot. Jan 28 02:29:23.577199 systemd[1]: Hostname set to <srv-hg60y.gb1.brightbox.com>. Jan 28 02:29:23.577230 systemd[1]: Initializing machine ID from VM UUID. Jan 28 02:29:23.577252 zram_generator::config[1060]: No configuration found. Jan 28 02:29:23.577275 systemd[1]: Populated /etc with preset unit settings. Jan 28 02:29:23.577294 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 02:29:23.577342 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 02:29:23.577366 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 02:29:23.577386 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 02:29:23.577406 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 02:29:23.577425 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 02:29:23.577453 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 02:29:23.577475 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 02:29:23.577496 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 28 02:29:23.577496 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 02:29:23.577550 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 02:29:23.577598 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 02:29:23.577622 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 02:29:23.577649 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 02:29:23.577670 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 02:29:23.577696 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 02:29:23.577717 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 02:29:23.577737 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 02:29:23.577757 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 02:29:23.577777 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 02:29:23.577810 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 02:29:23.577831 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 02:29:23.577858 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 02:29:23.577883 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 02:29:23.577903 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 02:29:23.577923 systemd[1]: Reached target slices.target - Slice Units. Jan 28 02:29:23.577959 systemd[1]: Reached target swap.target - Swaps. Jan 28 02:29:23.577981 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 02:29:23.578011 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 02:29:23.578032 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 02:29:23.578091 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 02:29:23.578114 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 02:29:23.578146 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 02:29:23.578168 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 02:29:23.578188 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 02:29:23.578208 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 02:29:23.578227 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 02:29:23.578247 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 02:29:23.578266 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 02:29:23.578292 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 02:29:23.578334 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 02:29:23.578356 systemd[1]: Reached target machines.target - Containers. 
Jan 28 02:29:23.578376 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 02:29:23.578401 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 02:29:23.578421 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 02:29:23.578441 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 02:29:23.578473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 02:29:23.578498 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 02:29:23.578542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 02:29:23.579748 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 02:29:23.579784 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 02:29:23.579805 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 02:29:23.579825 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 02:29:23.579845 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 02:29:23.579877 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 02:29:23.579896 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 02:29:23.579920 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 02:29:23.579968 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 02:29:23.579991 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 02:29:23.580011 kernel: fuse: init (API version 7.39) Jan 28 02:29:23.580032 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 02:29:23.580052 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 02:29:23.580072 systemd[1]: verity-setup.service: Deactivated successfully. Jan 28 02:29:23.580093 systemd[1]: Stopped verity-setup.service. Jan 28 02:29:23.580113 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 02:29:23.580133 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 02:29:23.580164 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 02:29:23.580187 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 02:29:23.580207 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 02:29:23.580228 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 02:29:23.580295 systemd-journald[1156]: Collecting audit messages is disabled. Jan 28 02:29:23.580349 kernel: ACPI: bus type drm_connector registered Jan 28 02:29:23.580372 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 02:29:23.580392 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 02:29:23.580418 kernel: loop: module loaded Jan 28 02:29:23.580446 systemd-journald[1156]: Journal started Jan 28 02:29:23.580478 systemd-journald[1156]: Runtime Journal (/run/log/journal/bae275cf4f894832a22708fea724b734) is 4.7M, max 38.0M, 33.2M free. 
Jan 28 02:29:23.156092 systemd[1]: Queued start job for default target multi-user.target. Jan 28 02:29:23.182450 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 02:29:23.183184 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 02:29:23.589309 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 02:29:23.588534 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 02:29:23.590004 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 02:29:23.590209 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 02:29:23.591785 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 02:29:23.592117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 02:29:23.593413 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 02:29:23.593774 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 02:29:23.595011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 02:29:23.595225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 02:29:23.596748 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 02:29:23.597053 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 02:29:23.598388 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 02:29:23.598720 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 02:29:23.600047 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 02:29:23.601228 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 02:29:23.602733 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 02:29:23.619472 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 02:29:23.629236 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 02:29:23.635649 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 02:29:23.637463 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 02:29:23.637527 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 02:29:23.641039 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 28 02:29:23.649740 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 02:29:23.654244 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 02:29:23.656816 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 02:29:23.658967 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 02:29:23.665160 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 02:29:23.667005 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 02:29:23.673778 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 28 02:29:23.675954 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 02:29:23.679271 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 02:29:23.685786 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 02:29:23.692795 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 02:29:23.696105 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 02:29:23.699056 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 02:29:23.700212 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 02:29:23.718065 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 02:29:23.723717 systemd-journald[1156]: Time spent on flushing to /var/log/journal/bae275cf4f894832a22708fea724b734 is 148.038ms for 1142 entries. Jan 28 02:29:23.723717 systemd-journald[1156]: System Journal (/var/log/journal/bae275cf4f894832a22708fea724b734) is 8.0M, max 584.8M, 576.8M free. Jan 28 02:29:23.908634 systemd-journald[1156]: Received client request to flush runtime journal. Jan 28 02:29:23.908721 kernel: loop0: detected capacity change from 0 to 140768 Jan 28 02:29:23.908763 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 02:29:23.908789 kernel: loop1: detected capacity change from 0 to 224512 Jan 28 02:29:23.720859 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 02:29:23.731443 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 28 02:29:23.815750 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 02:29:23.818031 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 28 02:29:23.828058 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 28 02:29:23.828102 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 28 02:29:23.843703 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 02:29:23.857622 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 02:29:23.921888 kernel: loop2: detected capacity change from 0 to 8 Jan 28 02:29:23.858875 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 02:29:23.912252 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 02:29:23.942535 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 02:29:23.951843 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 28 02:29:23.974605 kernel: loop3: detected capacity change from 0 to 142488 Jan 28 02:29:23.996696 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 28 02:29:24.007049 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 02:29:24.017760 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 02:29:24.058365 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Jan 28 02:29:24.059639 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. 
Jan 28 02:29:24.063631 kernel: loop4: detected capacity change from 0 to 140768 Jan 28 02:29:24.069451 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 02:29:24.113603 kernel: loop5: detected capacity change from 0 to 224512 Jan 28 02:29:24.139696 kernel: loop6: detected capacity change from 0 to 8 Jan 28 02:29:24.143621 kernel: loop7: detected capacity change from 0 to 142488 Jan 28 02:29:24.169911 (sd-merge)[1217]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 28 02:29:24.175612 (sd-merge)[1217]: Merged extensions into '/usr'. Jan 28 02:29:24.186710 systemd[1]: Reloading requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 02:29:24.186842 systemd[1]: Reloading... Jan 28 02:29:24.342657 zram_generator::config[1241]: No configuration found. Jan 28 02:29:24.535970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 02:29:24.578943 ldconfig[1184]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 02:29:24.605302 systemd[1]: Reloading finished in 415 ms. Jan 28 02:29:24.666024 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 02:29:24.667434 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 02:29:24.679869 systemd[1]: Starting ensure-sysext.service... Jan 28 02:29:24.691330 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 02:29:24.708750 systemd[1]: Reloading requested from client PID 1300 ('systemctl') (unit ensure-sysext.service)... Jan 28 02:29:24.708775 systemd[1]: Reloading... Jan 28 02:29:24.740375 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 02:29:24.741470 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 02:29:24.743245 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 02:29:24.745861 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Jan 28 02:29:24.746110 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Jan 28 02:29:24.766069 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 02:29:24.766283 systemd-tmpfiles[1301]: Skipping /boot Jan 28 02:29:24.806507 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 02:29:24.808792 systemd-tmpfiles[1301]: Skipping /boot Jan 28 02:29:24.824611 zram_generator::config[1337]: No configuration found. Jan 28 02:29:24.989273 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 02:29:25.063420 systemd[1]: Reloading finished in 354 ms. Jan 28 02:29:25.086706 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 02:29:25.094147 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 02:29:25.114835 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jan 28 02:29:25.120769 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 02:29:25.131791 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 02:29:25.137946 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 02:29:25.147115 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 02:29:25.152623 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 02:29:25.162769 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 02:29:25.163069 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 02:29:25.170391 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 02:29:25.179652 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 02:29:25.187026 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 02:29:25.187944 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 02:29:25.188097 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 02:29:25.192286 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 02:29:25.192898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 02:29:25.193238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 02:29:25.203696 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 02:29:25.205161 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 02:29:25.215945 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 02:29:25.219173 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 02:29:25.221308 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 02:29:25.221899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 02:29:25.239625 augenrules[1411]: No rules Jan 28 02:29:25.235440 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 02:29:25.235854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 02:29:25.238843 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 02:29:25.240399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 02:29:25.245800 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 02:29:25.246994 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 02:29:25.249776 systemd[1]: Finished ensure-sysext.service. 
Jan 28 02:29:25.250953 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 28 02:29:25.253228 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 02:29:25.253436 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 02:29:25.255052 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 02:29:25.255239 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 02:29:25.265282 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 28 02:29:25.265379 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 28 02:29:25.274899 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 28 02:29:25.288754 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 28 02:29:25.289001 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 28 02:29:25.304347 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 28 02:29:25.310453 systemd-udevd[1399]: Using default interface naming scheme 'v255'.
Jan 28 02:29:25.324693 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 28 02:29:25.328857 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 28 02:29:25.335400 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 28 02:29:25.344841 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 02:29:25.355910 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 02:29:25.474682 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 28 02:29:25.475744 systemd[1]: Reached target time-set.target - System Time Set.
Jan 28 02:29:25.497964 systemd-resolved[1397]: Positive Trust Anchors:
Jan 28 02:29:25.498413 systemd-resolved[1397]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 02:29:25.498476 systemd-resolved[1397]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 02:29:25.512866 systemd-resolved[1397]: Using system hostname 'srv-hg60y.gb1.brightbox.com'.
Jan 28 02:29:25.516254 systemd-networkd[1438]: lo: Link UP
Jan 28 02:29:25.516677 systemd-networkd[1438]: lo: Gained carrier
Jan 28 02:29:25.517591 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 02:29:25.518691 systemd-networkd[1438]: Enumeration completed
Jan 28 02:29:25.518725 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 02:29:25.519811 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 02:29:25.521168 systemd[1]: Reached target network.target - Network.
Jan 28 02:29:25.528787 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 28 02:29:25.576518 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 28 02:29:25.595619 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1444)
Jan 28 02:29:25.676363 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 02:29:25.683955 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 28 02:29:25.695996 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 02:29:25.696236 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 02:29:25.700366 systemd-networkd[1438]: eth0: Link UP
Jan 28 02:29:25.700534 systemd-networkd[1438]: eth0: Gained carrier
Jan 28 02:29:25.700662 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 02:29:25.703615 kernel: mousedev: PS/2 mouse device common for all mice
Jan 28 02:29:25.708636 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 28 02:29:25.714815 kernel: ACPI: button: Power Button [PWRF]
Jan 28 02:29:25.715666 systemd-networkd[1438]: eth0: DHCPv4 address 10.230.34.254/30, gateway 10.230.34.253 acquired from 10.230.34.253
Jan 28 02:29:25.720689 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Jan 28 02:29:25.725984 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 28 02:29:25.775613 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 28 02:29:25.790625 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 28 02:29:25.793752 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 28 02:29:25.794020 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 28 02:29:25.879961 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 02:29:26.066722 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 02:29:26.071740 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 28 02:29:26.077831 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 28 02:29:26.106611 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 28 02:29:26.138098 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 28 02:29:26.139318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 02:29:26.140142 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 02:29:26.141053 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 28 02:29:26.142086 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 28 02:29:26.143205 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 28 02:29:26.144115 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 28 02:29:26.144929 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 28 02:29:26.145722 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 28 02:29:26.145775 systemd[1]: Reached target paths.target - Path Units.
Jan 28 02:29:26.146421 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 02:29:26.148003 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 28 02:29:26.150625 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 28 02:29:26.156713 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 28 02:29:26.159183 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 28 02:29:26.160598 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 28 02:29:26.161435 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 02:29:26.162172 systemd[1]: Reached target basic.target - Basic System.
Jan 28 02:29:26.162888 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 28 02:29:26.162942 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 28 02:29:26.170748 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 28 02:29:26.175784 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 28 02:29:26.178724 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 28 02:29:26.184812 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 28 02:29:26.188657 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 28 02:29:26.193791 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 28 02:29:26.195262 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 28 02:29:26.198790 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 28 02:29:26.201370 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 28 02:29:26.203827 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 28 02:29:26.209281 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 28 02:29:26.219878 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 28 02:29:26.222378 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 28 02:29:26.223061 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 28 02:29:26.225820 systemd[1]: Starting update-engine.service - Update Engine...
Jan 28 02:29:26.229825 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 28 02:29:26.243067 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 28 02:29:26.245074 jq[1481]: false
Jan 28 02:29:26.259005 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 28 02:29:26.259338 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 28 02:29:26.265610 jq[1490]: true
Jan 28 02:29:26.276799 extend-filesystems[1482]: Found loop4
Jan 28 02:29:26.279684 extend-filesystems[1482]: Found loop5
Jan 28 02:29:26.279684 extend-filesystems[1482]: Found loop6
Jan 28 02:29:26.279684 extend-filesystems[1482]: Found loop7
Jan 28 02:29:26.279684 extend-filesystems[1482]: Found vda
Jan 28 02:29:26.279684 extend-filesystems[1482]: Found vda1
Jan 28 02:29:26.279684 extend-filesystems[1482]: Found vda2
Jan 28 02:29:26.279684 extend-filesystems[1482]: Found vda3
Jan 28 02:29:26.279684 extend-filesystems[1482]: Found usr
Jan 28 02:29:26.279684 extend-filesystems[1482]: Found vda4
Jan 28 02:29:26.279684 extend-filesystems[1482]: Found vda6
Jan 28 02:29:26.279684 extend-filesystems[1482]: Found vda7
Jan 28 02:29:26.279684 extend-filesystems[1482]: Found vda9
Jan 28 02:29:26.279684 extend-filesystems[1482]: Checking size of /dev/vda9
Jan 28 02:29:26.346966 dbus-daemon[1480]: [system] SELinux support is enabled
Jan 28 02:29:26.313083 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 28 02:29:26.361340 jq[1502]: true
Jan 28 02:29:26.354190 dbus-daemon[1480]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1438 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 28 02:29:26.334114 systemd[1]: motdgen.service: Deactivated successfully.
Jan 28 02:29:26.335178 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 28 02:29:26.344296 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 28 02:29:26.362309 tar[1494]: linux-amd64/LICENSE
Jan 28 02:29:26.362309 tar[1494]: linux-amd64/helm
Jan 28 02:29:26.346736 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 28 02:29:26.374688 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 28 02:29:26.348516 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 28 02:29:26.356418 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 28 02:29:26.357704 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 28 02:29:26.357742 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 28 02:29:26.358598 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 28 02:29:26.358632 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 28 02:29:26.401038 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 28 02:29:26.413018 extend-filesystems[1482]: Resized partition /dev/vda9
Jan 28 02:29:26.413850 extend-filesystems[1522]: resize2fs 1.47.1 (20-May-2024)
Jan 28 02:29:26.435917 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 28 02:29:26.436016 update_engine[1489]: I20260128 02:29:26.433430 1489 main.cc:92] Flatcar Update Engine starting
Jan 28 02:29:26.464364 systemd[1]: Started update-engine.service - Update Engine.
Jan 28 02:29:26.471803 update_engine[1489]: I20260128 02:29:26.463960 1489 update_check_scheduler.cc:74] Next update check in 3m3s
Jan 28 02:29:26.479928 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1450)
Jan 28 02:29:26.478423 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 28 02:29:26.602326 systemd-logind[1488]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 28 02:29:26.603686 systemd-logind[1488]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 28 02:29:26.605954 systemd-logind[1488]: New seat seat0.
Jan 28 02:29:26.608558 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 28 02:29:26.638266 bash[1539]: Updated "/home/core/.ssh/authorized_keys"
Jan 28 02:29:26.639324 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 28 02:29:26.651093 systemd[1]: Starting sshkeys.service...
Jan 28 02:29:26.664987 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 28 02:29:26.674108 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 28 02:29:26.771954 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 28 02:29:26.771874 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 28 02:29:26.772101 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 28 02:29:26.778905 dbus-daemon[1480]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1523 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 28 02:29:26.821527 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 28 02:29:26.821527 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 28 02:29:26.821527 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 28 02:29:26.778657 locksmithd[1529]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 28 02:29:26.827543 extend-filesystems[1482]: Resized filesystem in /dev/vda9
Jan 28 02:29:26.790902 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 28 02:29:26.805826 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 28 02:29:26.806102 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 28 02:29:26.838288 polkitd[1554]: Started polkitd version 121
Jan 28 02:29:26.868149 polkitd[1554]: Loading rules from directory /etc/polkit-1/rules.d
Jan 28 02:29:26.868254 polkitd[1554]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 28 02:29:26.874972 polkitd[1554]: Finished loading, compiling and executing 2 rules
Jan 28 02:29:26.875596 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 28 02:29:26.875942 polkitd[1554]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 28 02:29:26.876794 systemd[1]: Started polkit.service - Authorization Manager.
Jan 28 02:29:26.929254 systemd-hostnamed[1523]: Hostname set to <srv-hg60y.gb1.brightbox.com> (static)
Jan 28 02:29:26.986159 containerd[1508]: time="2026-01-28T02:29:26.984067139Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 28 02:29:27.061298 containerd[1508]: time="2026-01-28T02:29:27.061232025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 28 02:29:27.067619 containerd[1508]: time="2026-01-28T02:29:27.066842133Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 28 02:29:27.067619 containerd[1508]: time="2026-01-28T02:29:27.066884366Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 28 02:29:27.067619 containerd[1508]: time="2026-01-28T02:29:27.066919643Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 28 02:29:27.067619 containerd[1508]: time="2026-01-28T02:29:27.067204857Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 28 02:29:27.067619 containerd[1508]: time="2026-01-28T02:29:27.067239665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 28 02:29:27.067619 containerd[1508]: time="2026-01-28T02:29:27.067348602Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 28 02:29:27.067619 containerd[1508]: time="2026-01-28T02:29:27.067371240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 28 02:29:27.067989 containerd[1508]: time="2026-01-28T02:29:27.067958494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 28 02:29:27.068091 containerd[1508]: time="2026-01-28T02:29:27.068067537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 28 02:29:27.068184 containerd[1508]: time="2026-01-28T02:29:27.068160159Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 28 02:29:27.068596 containerd[1508]: time="2026-01-28T02:29:27.068298519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 28 02:29:27.068596 containerd[1508]: time="2026-01-28T02:29:27.068448025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 28 02:29:27.070303 containerd[1508]: time="2026-01-28T02:29:27.069859226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 28 02:29:27.070303 containerd[1508]: time="2026-01-28T02:29:27.070005636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 28 02:29:27.070303 containerd[1508]: time="2026-01-28T02:29:27.070030538Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 28 02:29:27.070303 containerd[1508]: time="2026-01-28T02:29:27.070168246Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 28 02:29:27.070303 containerd[1508]: time="2026-01-28T02:29:27.070258151Z" level=info msg="metadata content store policy set" policy=shared
Jan 28 02:29:27.076068 containerd[1508]: time="2026-01-28T02:29:27.075603318Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 28 02:29:27.076068 containerd[1508]: time="2026-01-28T02:29:27.075682772Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 28 02:29:27.076068 containerd[1508]: time="2026-01-28T02:29:27.075710219Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 28 02:29:27.076068 containerd[1508]: time="2026-01-28T02:29:27.075781175Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 28 02:29:27.076068 containerd[1508]: time="2026-01-28T02:29:27.075816738Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 28 02:29:27.076068 containerd[1508]: time="2026-01-28T02:29:27.075989291Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 28 02:29:27.078945 containerd[1508]: time="2026-01-28T02:29:27.078021259Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 28 02:29:27.078945 containerd[1508]: time="2026-01-28T02:29:27.078205540Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 28 02:29:27.078945 containerd[1508]: time="2026-01-28T02:29:27.078230858Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 28 02:29:27.078945 containerd[1508]: time="2026-01-28T02:29:27.078249806Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 28 02:29:27.078945 containerd[1508]: time="2026-01-28T02:29:27.078269484Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 28 02:29:27.078945 containerd[1508]: time="2026-01-28T02:29:27.078297681Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 28 02:29:27.078945 containerd[1508]: time="2026-01-28T02:29:27.078323366Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 28 02:29:27.078945 containerd[1508]: time="2026-01-28T02:29:27.078345030Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 28 02:29:27.078945 containerd[1508]: time="2026-01-28T02:29:27.078366272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 28 02:29:27.078945 containerd[1508]: time="2026-01-28T02:29:27.078386959Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 28 02:29:27.078945 containerd[1508]: time="2026-01-28T02:29:27.078406001Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 28 02:29:27.078945 containerd[1508]: time="2026-01-28T02:29:27.078436579Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 28 02:29:27.078945 containerd[1508]: time="2026-01-28T02:29:27.078481440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.078945 containerd[1508]: time="2026-01-28T02:29:27.078505092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078524888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078544629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078565656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078616489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078640416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078664239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078684040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078706372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078726564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078745079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078764604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078786218Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078828880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078851320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.079406 containerd[1508]: time="2026-01-28T02:29:27.078867959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 28 02:29:27.081980 containerd[1508]: time="2026-01-28T02:29:27.079952554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 28 02:29:27.081980 containerd[1508]: time="2026-01-28T02:29:27.080727480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 28 02:29:27.081980 containerd[1508]: time="2026-01-28T02:29:27.080751650Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 28 02:29:27.081980 containerd[1508]: time="2026-01-28T02:29:27.080771967Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 28 02:29:27.081980 containerd[1508]: time="2026-01-28T02:29:27.080788282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.081980 containerd[1508]: time="2026-01-28T02:29:27.080806962Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 28 02:29:27.081980 containerd[1508]: time="2026-01-28T02:29:27.080828515Z" level=info msg="NRI interface is disabled by configuration."
Jan 28 02:29:27.081980 containerd[1508]: time="2026-01-28T02:29:27.080846320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 28 02:29:27.082278 containerd[1508]: time="2026-01-28T02:29:27.081233541Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 28 02:29:27.082278 containerd[1508]: time="2026-01-28T02:29:27.081314878Z" level=info msg="Connect containerd service"
Jan 28 02:29:27.082278 containerd[1508]: time="2026-01-28T02:29:27.081364175Z" level=info msg="using legacy CRI server"
Jan 28 02:29:27.082278 containerd[1508]: time="2026-01-28T02:29:27.081379977Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 28 02:29:27.082278 containerd[1508]: time="2026-01-28T02:29:27.081527916Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 28 02:29:27.086794 containerd[1508]: time="2026-01-28T02:29:27.086744253Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 28 02:29:27.087154 containerd[1508]: time="2026-01-28T02:29:27.087048116Z" level=info msg="Start subscribing containerd event"
Jan 28 02:29:27.087431 containerd[1508]: time="2026-01-28T02:29:27.087280605Z" level=info msg="Start recovering state"
Jan 28 02:29:27.088039 containerd[1508]: time="2026-01-28T02:29:27.087520498Z" level=info msg="Start event monitor"
Jan 28 02:29:27.088039 containerd[1508]: time="2026-01-28T02:29:27.087570831Z" level=info msg="Start snapshots syncer"
Jan 28 02:29:27.088039 containerd[1508]: time="2026-01-28T02:29:27.087604568Z" level=info msg="Start cni network conf syncer for default"
Jan 28 02:29:27.088039 containerd[1508]: time="2026-01-28T02:29:27.087619370Z" level=info msg="Start streaming server"
Jan 28 02:29:27.088039 containerd[1508]: time="2026-01-28T02:29:27.087523322Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 28 02:29:27.088039 containerd[1508]: time="2026-01-28T02:29:27.087960003Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 28 02:29:27.088484 systemd[1]: Started containerd.service - containerd container runtime.
Jan 28 02:29:27.091182 containerd[1508]: time="2026-01-28T02:29:27.091147954Z" level=info msg="containerd successfully booted in 0.110672s"
Jan 28 02:29:27.215955 systemd-networkd[1438]: eth0: Gained IPv6LL
Jan 28 02:29:27.217837 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Jan 28 02:29:27.223691 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 28 02:29:27.228090 systemd[1]: Reached target network-online.target - Network is Online.
Jan 28 02:29:27.240960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 02:29:27.251531 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 28 02:29:27.315224 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 28 02:29:27.375644 tar[1494]: linux-amd64/README.md
Jan 28 02:29:27.390568 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 28 02:29:27.846791 sshd_keygen[1515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 28 02:29:27.881513 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 28 02:29:27.890715 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 28 02:29:27.901829 systemd[1]: Started sshd@0-10.230.34.254:22-68.220.241.50:60220.service - OpenSSH per-connection server daemon (68.220.241.50:60220).
Jan 28 02:29:27.905007 systemd[1]: issuegen.service: Deactivated successfully.
Jan 28 02:29:27.906709 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 28 02:29:27.919040 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 28 02:29:27.943155 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 28 02:29:27.955701 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 28 02:29:27.959149 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 28 02:29:27.960279 systemd[1]: Reached target getty.target - Login Prompts.
Jan 28 02:29:28.397479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 02:29:28.403853 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 02:29:28.721784 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Jan 28 02:29:28.737907 systemd-networkd[1438]: eth0: Ignoring DHCPv6 address 2a02:1348:179:88bf:24:19ff:fee6:22fe/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:88bf:24:19ff:fee6:22fe/64 assigned by NDisc.
Jan 28 02:29:28.737918 systemd-networkd[1438]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 28 02:29:28.801613 sshd[1593]: Accepted publickey for core from 68.220.241.50 port 60220 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc
Jan 28 02:29:28.803965 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:29:28.828774 systemd-logind[1488]: New session 1 of user core.
Jan 28 02:29:28.832313 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 28 02:29:28.840021 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 28 02:29:28.865643 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 28 02:29:28.875173 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 28 02:29:28.888502 (systemd)[1616]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 28 02:29:29.035871 systemd[1616]: Queued start job for default target default.target.
Jan 28 02:29:29.050132 systemd[1616]: Created slice app.slice - User Application Slice.
Jan 28 02:29:29.050519 systemd[1616]: Reached target paths.target - Paths.
Jan 28 02:29:29.050714 systemd[1616]: Reached target timers.target - Timers.
Jan 28 02:29:29.055753 systemd[1616]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 28 02:29:29.074224 systemd[1616]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 28 02:29:29.074671 systemd[1616]: Reached target sockets.target - Sockets.
Jan 28 02:29:29.074836 systemd[1616]: Reached target basic.target - Basic System.
Jan 28 02:29:29.074996 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 28 02:29:29.075809 systemd[1616]: Reached target default.target - Main User Target.
Jan 28 02:29:29.075882 systemd[1616]: Startup finished in 176ms.
Jan 28 02:29:29.084092 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 28 02:29:29.103672 kubelet[1607]: E0128 02:29:29.103556 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 02:29:29.106099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 02:29:29.106399 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 02:29:29.107592 systemd[1]: kubelet.service: Consumed 1.100s CPU time.
Jan 28 02:29:29.521042 systemd[1]: Started sshd@1-10.230.34.254:22-68.220.241.50:60224.service - OpenSSH per-connection server daemon (68.220.241.50:60224).
Jan 28 02:29:30.103661 sshd[1628]: Accepted publickey for core from 68.220.241.50 port 60224 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc
Jan 28 02:29:30.106374 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:29:30.114645 systemd-logind[1488]: New session 2 of user core.
Jan 28 02:29:30.130223 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 28 02:29:30.522777 sshd[1628]: pam_unix(sshd:session): session closed for user core
Jan 28 02:29:30.527148 systemd[1]: sshd@1-10.230.34.254:22-68.220.241.50:60224.service: Deactivated successfully.
Jan 28 02:29:30.529501 systemd[1]: session-2.scope: Deactivated successfully.
Jan 28 02:29:30.531893 systemd-logind[1488]: Session 2 logged out. Waiting for processes to exit.
Jan 28 02:29:30.533296 systemd-logind[1488]: Removed session 2.
Jan 28 02:29:30.608898 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Jan 28 02:29:30.642031 systemd[1]: Started sshd@2-10.230.34.254:22-68.220.241.50:60230.service - OpenSSH per-connection server daemon (68.220.241.50:60230).
Jan 28 02:29:31.218981 sshd[1637]: Accepted publickey for core from 68.220.241.50 port 60230 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc
Jan 28 02:29:31.221061 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:29:31.227613 systemd-logind[1488]: New session 3 of user core.
Jan 28 02:29:31.238169 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 28 02:29:31.634658 sshd[1637]: pam_unix(sshd:session): session closed for user core
Jan 28 02:29:31.639696 systemd[1]: sshd@2-10.230.34.254:22-68.220.241.50:60230.service: Deactivated successfully.
Jan 28 02:29:31.642173 systemd[1]: session-3.scope: Deactivated successfully.
Jan 28 02:29:31.643296 systemd-logind[1488]: Session 3 logged out. Waiting for processes to exit.
Jan 28 02:29:31.644898 systemd-logind[1488]: Removed session 3.
Jan 28 02:29:33.022291 login[1601]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 28 02:29:33.025095 login[1600]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 28 02:29:33.029949 systemd-logind[1488]: New session 4 of user core.
Jan 28 02:29:33.041885 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 28 02:29:33.045760 systemd-logind[1488]: New session 5 of user core.
Jan 28 02:29:33.051860 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 28 02:29:33.656354 coreos-metadata[1479]: Jan 28 02:29:33.656 WARN failed to locate config-drive, using the metadata service API instead
Jan 28 02:29:33.685422 coreos-metadata[1479]: Jan 28 02:29:33.685 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 28 02:29:33.692213 coreos-metadata[1479]: Jan 28 02:29:33.692 INFO Fetch failed with 404: resource not found
Jan 28 02:29:33.692213 coreos-metadata[1479]: Jan 28 02:29:33.692 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 28 02:29:33.693011 coreos-metadata[1479]: Jan 28 02:29:33.692 INFO Fetch successful
Jan 28 02:29:33.693199 coreos-metadata[1479]: Jan 28 02:29:33.693 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 28 02:29:33.703697 coreos-metadata[1479]: Jan 28 02:29:33.703 INFO Fetch successful
Jan 28 02:29:33.703975 coreos-metadata[1479]: Jan 28 02:29:33.703 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 28 02:29:33.730559 coreos-metadata[1479]: Jan 28 02:29:33.730 INFO Fetch successful
Jan 28 02:29:33.730870 coreos-metadata[1479]: Jan 28 02:29:33.730 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 28 02:29:33.757017 coreos-metadata[1542]: Jan 28 02:29:33.756 WARN failed to locate config-drive, using the metadata service API instead
Jan 28 02:29:33.761819 coreos-metadata[1479]: Jan 28 02:29:33.761 INFO Fetch successful
Jan 28 02:29:33.762021 coreos-metadata[1479]: Jan 28 02:29:33.761 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 28 02:29:33.780659 coreos-metadata[1542]: Jan 28 02:29:33.780 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 28 02:29:33.783483 coreos-metadata[1479]: Jan 28 02:29:33.783 INFO Fetch successful
Jan 28 02:29:33.803826 coreos-metadata[1542]: Jan 28 02:29:33.803 INFO Fetch successful
Jan 28 02:29:33.804235 coreos-metadata[1542]: Jan 28 02:29:33.804 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 28 02:29:33.815258 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 28 02:29:33.816261 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 28 02:29:33.835633 coreos-metadata[1542]: Jan 28 02:29:33.835 INFO Fetch successful
Jan 28 02:29:33.837928 unknown[1542]: wrote ssh authorized keys file for user: core
Jan 28 02:29:33.877470 update-ssh-keys[1678]: Updated "/home/core/.ssh/authorized_keys"
Jan 28 02:29:33.878809 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 28 02:29:33.881501 systemd[1]: Finished sshkeys.service.
Jan 28 02:29:33.885098 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 28 02:29:33.885656 systemd[1]: Startup finished in 1.446s (kernel) + 14.569s (initrd) + 11.613s (userspace) = 27.628s.
Jan 28 02:29:39.230311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 28 02:29:39.241901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 02:29:39.418475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 02:29:39.427064 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 02:29:39.596228 kubelet[1690]: E0128 02:29:39.596033 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 02:29:39.600344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 02:29:39.600629 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 02:29:41.738848 systemd[1]: Started sshd@3-10.230.34.254:22-68.220.241.50:45528.service - OpenSSH per-connection server daemon (68.220.241.50:45528).
Jan 28 02:29:42.325561 sshd[1698]: Accepted publickey for core from 68.220.241.50 port 45528 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc
Jan 28 02:29:42.327860 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:29:42.334146 systemd-logind[1488]: New session 6 of user core.
Jan 28 02:29:42.345949 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 28 02:29:42.742144 sshd[1698]: pam_unix(sshd:session): session closed for user core
Jan 28 02:29:42.746484 systemd[1]: sshd@3-10.230.34.254:22-68.220.241.50:45528.service: Deactivated successfully.
Jan 28 02:29:42.747149 systemd-logind[1488]: Session 6 logged out. Waiting for processes to exit.
Jan 28 02:29:42.749055 systemd[1]: session-6.scope: Deactivated successfully.
Jan 28 02:29:42.751007 systemd-logind[1488]: Removed session 6.
Jan 28 02:29:42.858051 systemd[1]: Started sshd@4-10.230.34.254:22-68.220.241.50:41922.service - OpenSSH per-connection server daemon (68.220.241.50:41922).
Jan 28 02:29:43.438789 sshd[1705]: Accepted publickey for core from 68.220.241.50 port 41922 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc
Jan 28 02:29:43.440934 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:29:43.447199 systemd-logind[1488]: New session 7 of user core.
Jan 28 02:29:43.455833 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 28 02:29:43.850316 sshd[1705]: pam_unix(sshd:session): session closed for user core
Jan 28 02:29:43.854761 systemd[1]: sshd@4-10.230.34.254:22-68.220.241.50:41922.service: Deactivated successfully.
Jan 28 02:29:43.854848 systemd-logind[1488]: Session 7 logged out. Waiting for processes to exit.
Jan 28 02:29:43.856872 systemd[1]: session-7.scope: Deactivated successfully.
Jan 28 02:29:43.859029 systemd-logind[1488]: Removed session 7.
Jan 28 02:29:43.955674 systemd[1]: Started sshd@5-10.230.34.254:22-68.220.241.50:41938.service - OpenSSH per-connection server daemon (68.220.241.50:41938).
Jan 28 02:29:44.546523 sshd[1712]: Accepted publickey for core from 68.220.241.50 port 41938 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc
Jan 28 02:29:44.549057 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:29:44.555985 systemd-logind[1488]: New session 8 of user core.
Jan 28 02:29:44.567379 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 28 02:29:44.963819 sshd[1712]: pam_unix(sshd:session): session closed for user core
Jan 28 02:29:44.968676 systemd-logind[1488]: Session 8 logged out. Waiting for processes to exit.
Jan 28 02:29:44.969854 systemd[1]: sshd@5-10.230.34.254:22-68.220.241.50:41938.service: Deactivated successfully.
Jan 28 02:29:44.972743 systemd[1]: session-8.scope: Deactivated successfully.
Jan 28 02:29:44.974919 systemd-logind[1488]: Removed session 8.
Jan 28 02:29:45.074140 systemd[1]: Started sshd@6-10.230.34.254:22-68.220.241.50:41948.service - OpenSSH per-connection server daemon (68.220.241.50:41948).
Jan 28 02:29:45.642706 sshd[1719]: Accepted publickey for core from 68.220.241.50 port 41948 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc
Jan 28 02:29:45.645520 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:29:45.656388 systemd-logind[1488]: New session 9 of user core.
Jan 28 02:29:45.667013 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 28 02:29:45.991514 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 28 02:29:45.992070 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 28 02:29:46.011376 sudo[1722]: pam_unix(sudo:session): session closed for user root
Jan 28 02:29:46.109139 sshd[1719]: pam_unix(sshd:session): session closed for user core
Jan 28 02:29:46.115638 systemd[1]: sshd@6-10.230.34.254:22-68.220.241.50:41948.service: Deactivated successfully.
Jan 28 02:29:46.118501 systemd[1]: session-9.scope: Deactivated successfully.
Jan 28 02:29:46.119679 systemd-logind[1488]: Session 9 logged out. Waiting for processes to exit.
Jan 28 02:29:46.121491 systemd-logind[1488]: Removed session 9.
Jan 28 02:29:46.220055 systemd[1]: Started sshd@7-10.230.34.254:22-68.220.241.50:41962.service - OpenSSH per-connection server daemon (68.220.241.50:41962).
Jan 28 02:29:46.824998 sshd[1727]: Accepted publickey for core from 68.220.241.50 port 41962 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc
Jan 28 02:29:46.827409 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:29:46.834653 systemd-logind[1488]: New session 10 of user core.
Jan 28 02:29:46.843827 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 28 02:29:47.156803 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 28 02:29:47.157329 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 28 02:29:47.163808 sudo[1731]: pam_unix(sudo:session): session closed for user root
Jan 28 02:29:47.172883 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 28 02:29:47.173364 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 28 02:29:47.198070 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 28 02:29:47.201476 auditctl[1734]: No rules
Jan 28 02:29:47.203390 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 28 02:29:47.203878 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 28 02:29:47.211075 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 28 02:29:47.252719 augenrules[1752]: No rules
Jan 28 02:29:47.254716 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 28 02:29:47.257932 sudo[1730]: pam_unix(sudo:session): session closed for user root
Jan 28 02:29:47.355422 sshd[1727]: pam_unix(sshd:session): session closed for user core
Jan 28 02:29:47.360132 systemd-logind[1488]: Session 10 logged out. Waiting for processes to exit.
Jan 28 02:29:47.362180 systemd[1]: sshd@7-10.230.34.254:22-68.220.241.50:41962.service: Deactivated successfully.
Jan 28 02:29:47.364528 systemd[1]: session-10.scope: Deactivated successfully.
Jan 28 02:29:47.365720 systemd-logind[1488]: Removed session 10.
Jan 28 02:29:47.468122 systemd[1]: Started sshd@8-10.230.34.254:22-68.220.241.50:41966.service - OpenSSH per-connection server daemon (68.220.241.50:41966).
Jan 28 02:29:48.039422 sshd[1760]: Accepted publickey for core from 68.220.241.50 port 41966 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc
Jan 28 02:29:48.041810 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:29:48.049176 systemd-logind[1488]: New session 11 of user core.
Jan 28 02:29:48.056851 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 28 02:29:48.360660 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 28 02:29:48.361312 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 28 02:29:48.843298 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 28 02:29:48.844010 (dockerd)[1778]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 28 02:29:49.295506 dockerd[1778]: time="2026-01-28T02:29:49.294667569Z" level=info msg="Starting up"
Jan 28 02:29:49.430822 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2346926128-merged.mount: Deactivated successfully.
Jan 28 02:29:49.444219 systemd[1]: var-lib-docker-metacopy\x2dcheck2910460369-merged.mount: Deactivated successfully.
Jan 28 02:29:49.467545 dockerd[1778]: time="2026-01-28T02:29:49.467473106Z" level=info msg="Loading containers: start."
Jan 28 02:29:49.620625 kernel: Initializing XFRM netlink socket
Jan 28 02:29:49.662571 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection.
Jan 28 02:29:49.663865 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 28 02:29:49.679744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 02:29:49.958560 systemd-networkd[1438]: docker0: Link UP
Jan 28 02:29:50.006606 dockerd[1778]: time="2026-01-28T02:29:50.005337649Z" level=info msg="Loading containers: done."
Jan 28 02:29:50.039041 dockerd[1778]: time="2026-01-28T02:29:50.038129927Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 28 02:29:50.039041 dockerd[1778]: time="2026-01-28T02:29:50.038410936Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 28 02:29:50.039041 dockerd[1778]: time="2026-01-28T02:29:50.038629926Z" level=info msg="Daemon has completed initialization"
Jan 28 02:29:50.044316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 02:29:50.059019 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 02:29:50.114047 dockerd[1778]: time="2026-01-28T02:29:50.113849794Z" level=info msg="API listen on /run/docker.sock"
Jan 28 02:29:50.115227 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 28 02:29:50.148921 kubelet[1895]: E0128 02:29:50.148850 1895 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 02:29:50.151907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 02:29:50.152175 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 02:29:51.388848 systemd-timesyncd[1425]: Contacted time server [2a00:da00:f461:7a00::1]:123 (2.flatcar.pool.ntp.org).
Jan 28 02:29:51.388955 systemd-timesyncd[1425]: Initial clock synchronization to Wed 2026-01-28 02:29:51.388472 UTC.
Jan 28 02:29:51.389061 systemd-resolved[1397]: Clock change detected. Flushing caches.
Jan 28 02:29:52.012114 containerd[1508]: time="2026-01-28T02:29:52.012032999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 28 02:29:52.822630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113018059.mount: Deactivated successfully.
Jan 28 02:29:58.968388 containerd[1508]: time="2026-01-28T02:29:58.967551058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:29:58.971489 containerd[1508]: time="2026-01-28T02:29:58.969175742Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070655" Jan 28 02:29:58.972730 containerd[1508]: time="2026-01-28T02:29:58.972696421Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:29:58.979812 containerd[1508]: time="2026-01-28T02:29:58.979759609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:29:58.981664 containerd[1508]: time="2026-01-28T02:29:58.981623230Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 6.969494196s" Jan 28 02:29:58.981854 containerd[1508]: time="2026-01-28T02:29:58.981823075Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 28 02:29:58.984120 containerd[1508]: time="2026-01-28T02:29:58.984053825Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 02:29:59.486139 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 28 02:30:00.957522 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 02:30:00.967098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:30:01.209575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:30:01.213196 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:30:01.414552 kubelet[2008]: E0128 02:30:01.414397 2008 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 02:30:01.418732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 02:30:01.419016 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
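Each completed pull pairs a "bytes read" figure with an elapsed time, which gives a quick registry-throughput estimate; for kube-apiserver:v1.32.11 that is roughly 29 MB over 6.97 s, about 4 MB/s. A throwaway calculation using only values from the log:

```python
# Back-of-envelope pull throughput from the containerd fields above
# ("bytes read" and the "in <duration>" suffix of the Pulled line).
def throughput_mib_s(bytes_read: int, seconds: float) -> float:
    return bytes_read / seconds / (1024 ** 2)

# kube-apiserver:v1.32.11: 29070655 bytes in 6.969494196 s
print(f"{throughput_mib_s(29_070_655, 6.969494196):.2f} MiB/s")  # ~3.98 MiB/s
```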
Jan 28 02:30:02.488375 containerd[1508]: time="2026-01-28T02:30:02.487547217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:02.490286 containerd[1508]: time="2026-01-28T02:30:02.490204303Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993362" Jan 28 02:30:02.491171 containerd[1508]: time="2026-01-28T02:30:02.491083411Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:02.496180 containerd[1508]: time="2026-01-28T02:30:02.495618388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:02.497505 containerd[1508]: time="2026-01-28T02:30:02.497084438Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 3.512662018s" Jan 28 02:30:02.497505 containerd[1508]: time="2026-01-28T02:30:02.497178939Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 28 02:30:02.499389 containerd[1508]: time="2026-01-28T02:30:02.499343383Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 02:30:04.328958 containerd[1508]: time="2026-01-28T02:30:04.328845770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:04.330942 containerd[1508]: time="2026-01-28T02:30:04.330625022Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405084" Jan 28 02:30:04.332262 containerd[1508]: time="2026-01-28T02:30:04.331614712Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:04.335610 containerd[1508]: time="2026-01-28T02:30:04.335570725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:04.337286 containerd[1508]: time="2026-01-28T02:30:04.337243241Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.837859255s" Jan 28 02:30:04.337379 containerd[1508]: time="2026-01-28T02:30:04.337299135Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 28 02:30:04.339826 
containerd[1508]: time="2026-01-28T02:30:04.339745818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 02:30:06.093250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4207533216.mount: Deactivated successfully. Jan 28 02:30:07.643105 containerd[1508]: time="2026-01-28T02:30:07.641836328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:07.644277 containerd[1508]: time="2026-01-28T02:30:07.644231734Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161907" Jan 28 02:30:07.644846 containerd[1508]: time="2026-01-28T02:30:07.644802833Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:07.648614 containerd[1508]: time="2026-01-28T02:30:07.647617591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:07.649112 containerd[1508]: time="2026-01-28T02:30:07.649072025Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 3.309125778s" Jan 28 02:30:07.649309 containerd[1508]: time="2026-01-28T02:30:07.649268381Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 28 02:30:07.651093 containerd[1508]: time="2026-01-28T02:30:07.650935021Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 02:30:08.317037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543542071.mount: Deactivated successfully. Jan 28 02:30:11.458312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 02:30:11.472019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:30:11.793178 containerd[1508]: time="2026-01-28T02:30:11.791948398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:11.799516 containerd[1508]: time="2026-01-28T02:30:11.799445020Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jan 28 02:30:11.803169 containerd[1508]: time="2026-01-28T02:30:11.802630123Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:11.806357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
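Every "Pulled image" entry above carries three identifiers: the repo tag (mutable), the repo digest (content-addressed, what containerd actually resolved), and the local image id (the config blob's sha256). A hypothetical splitter for the two reference forms seen here; real parsing lives in the distribution/reference code, and this sketch ignores registry ports and other corner cases:

```python
# Hypothetical reference splitter for the two forms in the log above:
# tag references ("name:tag") and digest references ("name@sha256:...").
def split_reference(ref: str) -> dict:
    if "@" in ref:
        name, digest = ref.split("@", 1)
        return {"name": name, "digest": digest}
    name, _, tag = ref.rpartition(":")
    return {"name": name, "tag": tag}

print(split_reference("registry.k8s.io/kube-proxy:v1.32.11"))
print(split_reference("registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9"))
```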
Jan 28 02:30:11.811039 containerd[1508]: time="2026-01-28T02:30:11.810978303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:11.814640 containerd[1508]: time="2026-01-28T02:30:11.814540195Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.163209905s" Jan 28 02:30:11.814640 containerd[1508]: time="2026-01-28T02:30:11.814631409Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 02:30:11.816763 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:30:11.818537 containerd[1508]: time="2026-01-28T02:30:11.818483167Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 02:30:11.921405 kubelet[2091]: E0128 02:30:11.920992 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 02:30:11.925280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 02:30:11.925557 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 02:30:11.986213 update_engine[1489]: I20260128 02:30:11.985494 1489 update_attempter.cc:509] Updating boot flags... Jan 28 02:30:12.052191 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2106) Jan 28 02:30:12.130180 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2110) Jan 28 02:30:12.444688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3967743747.mount: Deactivated successfully. 
Jan 28 02:30:12.455294 containerd[1508]: time="2026-01-28T02:30:12.455193252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:12.457181 containerd[1508]: time="2026-01-28T02:30:12.457003075Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 28 02:30:12.458202 containerd[1508]: time="2026-01-28T02:30:12.458124915Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:12.462118 containerd[1508]: time="2026-01-28T02:30:12.462080894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:12.463763 containerd[1508]: time="2026-01-28T02:30:12.463502405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 644.966846ms" Jan 28 02:30:12.463763 containerd[1508]: time="2026-01-28T02:30:12.463558394Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 02:30:12.464509 containerd[1508]: time="2026-01-28T02:30:12.464359403Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 02:30:13.396839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount951936986.mount: Deactivated successfully. Jan 28 02:30:18.163842 containerd[1508]: time="2026-01-28T02:30:18.163585732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:18.166178 containerd[1508]: time="2026-01-28T02:30:18.165857286Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Jan 28 02:30:18.167031 containerd[1508]: time="2026-01-28T02:30:18.166973826Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:18.171173 containerd[1508]: time="2026-01-28T02:30:18.171058579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:18.173578 containerd[1508]: time="2026-01-28T02:30:18.172902134Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.708494494s" Jan 28 02:30:18.173578 containerd[1508]: time="2026-01-28T02:30:18.172973647Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 28 02:30:21.957602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
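The kubelet restart counter climbs 2 → 3 → 4 → 5 at roughly ten-second spacing (02:29:49, 02:30:00, 02:30:11, 02:30:21), consistent with a unit using Restart=always and RestartSec=10; the unit settings are an assumption, not something this log states. The spacing, computed from the logged timestamps:

```python
# Kubelet restart cadence visible above; ~10.5 s between attempts is
# consistent with (but does not prove) Restart=always + RestartSec=10.
from datetime import datetime

attempts = ["02:29:49.663865", "02:30:00.957522", "02:30:11.458312", "02:30:21.957602"]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in attempts]
for a, b in zip(times, times[1:]):
    print(f"{(b - a).total_seconds():.1f} s")   # ~11.3, 10.5, 10.5
```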
Jan 28 02:30:21.968493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:30:22.185426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:30:22.196841 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:30:22.235505 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:30:22.273726 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 02:30:22.274088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:30:22.285487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:30:22.315386 systemd[1]: Reloading requested from client PID 2212 ('systemctl') (unit session-11.scope)... Jan 28 02:30:22.315428 systemd[1]: Reloading... Jan 28 02:30:22.577470 zram_generator::config[2251]: No configuration found. Jan 28 02:30:22.668782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 02:30:22.778126 systemd[1]: Reloading finished in 462 ms. Jan 28 02:30:22.854442 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 02:30:22.854624 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 02:30:22.855083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:30:22.862511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:30:23.193853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:30:23.210668 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 02:30:23.284486 kubelet[2317]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 02:30:23.284486 kubelet[2317]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 02:30:23.284486 kubelet[2317]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
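The three deprecation warnings above all point at the same remedy: move the flag's value into the KubeletConfiguration file. The mapping below is a sketch; field names are assumed from the kubelet's v1beta1 config schema, so verify them against the kubelet-config-file documentation the warnings link to:

```python
# Flag -> KubeletConfiguration field, per the deprecation warnings above.
# Field names assumed from the v1beta1 KubeletConfiguration schema.
FLAG_TO_FIELD = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir": "volumePluginDir",
    # --pod-infra-container-image has no config-file equivalent; per the log
    # it is removed in 1.35, with the sandbox image coming from the CRI.
}
for flag, field in FLAG_TO_FIELD.items():
    print(f"{flag:32} -> {field}")
```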
Jan 28 02:30:23.317350 kubelet[2317]: I0128 02:30:23.317246 2317 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 02:30:23.949015 kubelet[2317]: I0128 02:30:23.948932 2317 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 02:30:23.949335 kubelet[2317]: I0128 02:30:23.949315 2317 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 02:30:23.949857 kubelet[2317]: I0128 02:30:23.949832 2317 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 02:30:23.985188 kubelet[2317]: E0128 02:30:23.984311 2317 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.34.254:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.34.254:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:30:23.985632 kubelet[2317]: I0128 02:30:23.985590 2317 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 02:30:24.006425 kubelet[2317]: E0128 02:30:24.006363 2317 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 02:30:24.006659 kubelet[2317]: I0128 02:30:24.006635 2317 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 02:30:24.015703 kubelet[2317]: I0128 02:30:24.015679 2317 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 02:30:24.020771 kubelet[2317]: I0128 02:30:24.020717 2317 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 02:30:24.021180 kubelet[2317]: I0128 02:30:24.020876 2317 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-hg60y.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 02:30:24.021564 kubelet[2317]: I0128 02:30:24.021544 2317 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 02:30:24.021700 kubelet[2317]: I0128 02:30:24.021681 2317 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 02:30:24.023251 kubelet[2317]: I0128 02:30:24.023194 2317 state_mem.go:36] "Initialized new in-memory state store" Jan 28 02:30:24.027991 kubelet[2317]: I0128 02:30:24.027962 2317 kubelet.go:446] "Attempting to sync node with API server" Jan 28 02:30:24.028083 kubelet[2317]: I0128 02:30:24.028019 2317 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 02:30:24.028083 kubelet[2317]: I0128 02:30:24.028058 2317 kubelet.go:352] "Adding apiserver pod source" Jan 28 02:30:24.028210 kubelet[2317]: I0128 02:30:24.028092 2317 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 02:30:24.036167 kubelet[2317]: I0128 02:30:24.034438 2317 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 02:30:24.038003 kubelet[2317]: I0128 02:30:24.037639 2317 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 02:30:24.038003 kubelet[2317]: W0128 02:30:24.037751 2317 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
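The HardEvictionThresholds block in the NodeConfig dump above mixes absolute quantities (memory.available < 100Mi) with percentages of capacity (nodefs.available < 10%, inodesFree < 5%, imagefs.available < 15%). A small worked example of how such a threshold reads, using made-up capacities rather than anything from this node:

```python
# How the hard eviction thresholds above evaluate; illustrative arithmetic
# with invented capacities -- not kubelet code.
THRESHOLDS = {
    "memory.available":   ("quantity", 100 * 1024**2),  # < 100Mi
    "nodefs.available":   ("percentage", 0.10),          # < 10% of capacity
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def breaches(signal: str, available: float, capacity: float) -> bool:
    kind, limit = THRESHOLDS[signal]
    floor = limit if kind == "quantity" else limit * capacity
    return available < floor

# a 20 GiB nodefs with 1.5 GiB free (7.5%) breaches the 10% floor:
print(breaches("nodefs.available", 1.5 * 1024**3, 20 * 1024**3))  # True
```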
Jan 28 02:30:24.039421 kubelet[2317]: I0128 02:30:24.038770 2317 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 02:30:24.039421 kubelet[2317]: I0128 02:30:24.038838 2317 server.go:1287] "Started kubelet" Jan 28 02:30:24.039421 kubelet[2317]: W0128 02:30:24.039027 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.34.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.34.254:6443: connect: connection refused Jan 28 02:30:24.039421 kubelet[2317]: E0128 02:30:24.039121 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.34.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.34.254:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:30:24.039421 kubelet[2317]: W0128 02:30:24.039241 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.34.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hg60y.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.34.254:6443: connect: connection refused Jan 28 02:30:24.039421 kubelet[2317]: E0128 02:30:24.039293 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.34.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hg60y.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.34.254:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:30:24.044637 kubelet[2317]: I0128 02:30:24.044568 2317 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 02:30:24.045452 kubelet[2317]: I0128 02:30:24.045427 2317 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 02:30:24.050688 kubelet[2317]: E0128 02:30:24.046949 2317 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.34.254:6443/api/v1/namespaces/default/events\": dial tcp 10.230.34.254:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-hg60y.gb1.brightbox.com.188ec43253efa281 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-hg60y.gb1.brightbox.com,UID:srv-hg60y.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-hg60y.gb1.brightbox.com,},FirstTimestamp:2026-01-28 02:30:24.038806145 +0000 UTC m=+0.823888007,LastTimestamp:2026-01-28 02:30:24.038806145 +0000 UTC m=+0.823888007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-hg60y.gb1.brightbox.com,}" Jan 28 02:30:24.050943 kubelet[2317]: I0128 02:30:24.050899 2317 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 02:30:24.052111 kubelet[2317]: I0128 02:30:24.052084 2317 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 02:30:24.052556 kubelet[2317]: I0128 02:30:24.052522 2317 server.go:479] "Adding debug handlers to kubelet server" Jan 28 02:30:24.055022 kubelet[2317]: I0128 02:30:24.054950 2317 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 02:30:24.061732 kubelet[2317]: I0128 02:30:24.061705 2317 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 02:30:24.062045 kubelet[2317]: E0128 02:30:24.062003 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-hg60y.gb1.brightbox.com\" not found" Jan 28 02:30:24.063407 kubelet[2317]: E0128 02:30:24.063356 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.34.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hg60y.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.34.254:6443: connect: connection refused" interval="200ms" Jan 28 02:30:24.063820 kubelet[2317]: I0128 02:30:24.063794 2317 factory.go:221] Registration of the systemd container factory successfully Jan 28 02:30:24.064054 kubelet[2317]: I0128 02:30:24.064018 2317 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 02:30:24.065747 kubelet[2317]: I0128 02:30:24.065713 2317 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 02:30:24.065855 kubelet[2317]: I0128 02:30:24.065800 2317 reconciler.go:26] "Reconciler: start to sync state" Jan 28 02:30:24.076923 kubelet[2317]: W0128 02:30:24.076789 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.34.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.34.254:6443: connect: connection refused Jan 28 02:30:24.076923 kubelet[2317]: E0128 02:30:24.076861 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.34.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.34.254:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:30:24.081593 kubelet[2317]: E0128 02:30:24.080276 2317 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 02:30:24.081593 kubelet[2317]: I0128 02:30:24.080469 2317 factory.go:221] Registration of the containerd container factory successfully Jan 28 02:30:24.094222 kubelet[2317]: I0128 02:30:24.094171 2317 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 02:30:24.097623 kubelet[2317]: I0128 02:30:24.097593 2317 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 02:30:24.097776 kubelet[2317]: I0128 02:30:24.097755 2317 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 02:30:24.097927 kubelet[2317]: I0128 02:30:24.097895 2317 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 02:30:24.098055 kubelet[2317]: I0128 02:30:24.098038 2317 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 02:30:24.098273 kubelet[2317]: E0128 02:30:24.098242 2317 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 02:30:24.115394 kubelet[2317]: W0128 02:30:24.115304 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.34.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.34.254:6443: connect: connection refused Jan 28 02:30:24.115622 kubelet[2317]: E0128 02:30:24.115580 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.34.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.34.254:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:30:24.124235 kubelet[2317]: I0128 02:30:24.124197 2317 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 02:30:24.124235 kubelet[2317]: I0128 02:30:24.124234 2317 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 02:30:24.124386 kubelet[2317]: I0128 02:30:24.124272 2317 state_mem.go:36] "Initialized new in-memory state store" Jan 28 02:30:24.126322 kubelet[2317]: I0128 02:30:24.126265 2317 policy_none.go:49] "None policy: Start" Jan 28 02:30:24.126322 kubelet[2317]: I0128 02:30:24.126311 2317 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 02:30:24.126475 kubelet[2317]: I0128 02:30:24.126341 2317 state_mem.go:35] "Initializing new in-memory state store" Jan 28 02:30:24.135827 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 02:30:24.153360 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 02:30:24.159758 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 02:30:24.163055 kubelet[2317]: E0128 02:30:24.163021 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-hg60y.gb1.brightbox.com\" not found" Jan 28 02:30:24.168706 kubelet[2317]: I0128 02:30:24.168407 2317 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 02:30:24.168806 kubelet[2317]: I0128 02:30:24.168755 2317 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 02:30:24.168869 kubelet[2317]: I0128 02:30:24.168789 2317 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 02:30:24.169318 kubelet[2317]: I0128 02:30:24.169283 2317 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 02:30:24.171883 kubelet[2317]: E0128 02:30:24.171850 2317 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 02:30:24.171969 kubelet[2317]: E0128 02:30:24.171944 2317 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-hg60y.gb1.brightbox.com\" not found" Jan 28 02:30:24.213842 systemd[1]: Created slice kubepods-burstable-pod2c3d7f2e0db73f844ff8135d7c5aba69.slice - libcontainer container kubepods-burstable-pod2c3d7f2e0db73f844ff8135d7c5aba69.slice. Jan 28 02:30:24.232631 kubelet[2317]: E0128 02:30:24.232583 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hg60y.gb1.brightbox.com\" not found" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.237745 systemd[1]: Created slice kubepods-burstable-podf39da272ef74ab07e7504d1c5decb104.slice - libcontainer container kubepods-burstable-podf39da272ef74ab07e7504d1c5decb104.slice. Jan 28 02:30:24.241302 kubelet[2317]: E0128 02:30:24.241272 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hg60y.gb1.brightbox.com\" not found" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.243695 systemd[1]: Created slice kubepods-burstable-pod4868daa66997ef626fc1f96b02ad71b0.slice - libcontainer container kubepods-burstable-pod4868daa66997ef626fc1f96b02ad71b0.slice. Jan 28 02:30:24.246571 kubelet[2317]: E0128 02:30:24.246300 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hg60y.gb1.brightbox.com\" not found" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.264331 kubelet[2317]: E0128 02:30:24.264282 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.34.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hg60y.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.34.254:6443: connect: connection refused" interval="400ms" Jan 28 02:30:24.266869 kubelet[2317]: I0128 02:30:24.266824 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f39da272ef74ab07e7504d1c5decb104-flexvolume-dir\") pod \"kube-controller-manager-srv-hg60y.gb1.brightbox.com\" (UID: \"f39da272ef74ab07e7504d1c5decb104\") " pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.266966 kubelet[2317]: I0128 02:30:24.266887 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f39da272ef74ab07e7504d1c5decb104-kubeconfig\") pod \"kube-controller-manager-srv-hg60y.gb1.brightbox.com\" (UID: \"f39da272ef74ab07e7504d1c5decb104\") " pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.266966 kubelet[2317]: I0128 02:30:24.266917 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4868daa66997ef626fc1f96b02ad71b0-kubeconfig\") pod \"kube-scheduler-srv-hg60y.gb1.brightbox.com\" (UID: \"4868daa66997ef626fc1f96b02ad71b0\") " pod="kube-system/kube-scheduler-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.266966 kubelet[2317]: I0128 02:30:24.266941 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c3d7f2e0db73f844ff8135d7c5aba69-ca-certs\") pod 
\"kube-apiserver-srv-hg60y.gb1.brightbox.com\" (UID: \"2c3d7f2e0db73f844ff8135d7c5aba69\") " pod="kube-system/kube-apiserver-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.267131 kubelet[2317]: I0128 02:30:24.266971 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c3d7f2e0db73f844ff8135d7c5aba69-usr-share-ca-certificates\") pod \"kube-apiserver-srv-hg60y.gb1.brightbox.com\" (UID: \"2c3d7f2e0db73f844ff8135d7c5aba69\") " pod="kube-system/kube-apiserver-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.267131 kubelet[2317]: I0128 02:30:24.266998 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f39da272ef74ab07e7504d1c5decb104-ca-certs\") pod \"kube-controller-manager-srv-hg60y.gb1.brightbox.com\" (UID: \"f39da272ef74ab07e7504d1c5decb104\") " pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.267131 kubelet[2317]: I0128 02:30:24.267024 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f39da272ef74ab07e7504d1c5decb104-k8s-certs\") pod \"kube-controller-manager-srv-hg60y.gb1.brightbox.com\" (UID: \"f39da272ef74ab07e7504d1c5decb104\") " pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.267131 kubelet[2317]: I0128 02:30:24.267049 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f39da272ef74ab07e7504d1c5decb104-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-hg60y.gb1.brightbox.com\" (UID: \"f39da272ef74ab07e7504d1c5decb104\") " pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.267131 kubelet[2317]: I0128 02:30:24.267077 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c3d7f2e0db73f844ff8135d7c5aba69-k8s-certs\") pod \"kube-apiserver-srv-hg60y.gb1.brightbox.com\" (UID: \"2c3d7f2e0db73f844ff8135d7c5aba69\") " pod="kube-system/kube-apiserver-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.272793 kubelet[2317]: I0128 02:30:24.272327 2317 kubelet_node_status.go:75] "Attempting to register node" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.273102 kubelet[2317]: E0128 02:30:24.273072 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.34.254:6443/api/v1/nodes\": dial tcp 10.230.34.254:6443: connect: connection refused" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.477272 kubelet[2317]: I0128 02:30:24.476670 2317 kubelet_node_status.go:75] "Attempting to register node" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.477272 kubelet[2317]: E0128 02:30:24.477117 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.34.254:6443/api/v1/nodes\": dial tcp 10.230.34.254:6443: connect: connection refused" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.540186 containerd[1508]: time="2026-01-28T02:30:24.539654901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-hg60y.gb1.brightbox.com,Uid:2c3d7f2e0db73f844ff8135d7c5aba69,Namespace:kube-system,Attempt:0,}" Jan 28 02:30:24.543075 containerd[1508]: 
time="2026-01-28T02:30:24.543038669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-hg60y.gb1.brightbox.com,Uid:f39da272ef74ab07e7504d1c5decb104,Namespace:kube-system,Attempt:0,}" Jan 28 02:30:24.548907 containerd[1508]: time="2026-01-28T02:30:24.548460864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-hg60y.gb1.brightbox.com,Uid:4868daa66997ef626fc1f96b02ad71b0,Namespace:kube-system,Attempt:0,}" Jan 28 02:30:24.665749 kubelet[2317]: E0128 02:30:24.665696 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.34.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hg60y.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.34.254:6443: connect: connection refused" interval="800ms" Jan 28 02:30:24.880918 kubelet[2317]: I0128 02:30:24.880702 2317 kubelet_node_status.go:75] "Attempting to register node" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:24.881298 kubelet[2317]: E0128 02:30:24.881218 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.34.254:6443/api/v1/nodes\": dial tcp 10.230.34.254:6443: connect: connection refused" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:25.065335 kubelet[2317]: W0128 02:30:25.065226 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.34.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hg60y.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.34.254:6443: connect: connection refused Jan 28 02:30:25.065479 kubelet[2317]: E0128 02:30:25.065371 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.34.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hg60y.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.34.254:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:30:25.077497 kubelet[2317]: W0128 02:30:25.077396 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.34.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.34.254:6443: connect: connection refused Jan 28 02:30:25.077497 kubelet[2317]: E0128 02:30:25.077442 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.34.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.34.254:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:30:25.158810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1540867511.mount: Deactivated successfully. 
Jan 28 02:30:25.179465 containerd[1508]: time="2026-01-28T02:30:25.177549163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:30:25.180861 containerd[1508]: time="2026-01-28T02:30:25.180828368Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:30:25.181842 containerd[1508]: time="2026-01-28T02:30:25.181229833Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 02:30:25.183187 containerd[1508]: time="2026-01-28T02:30:25.183120903Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 02:30:25.184140 containerd[1508]: time="2026-01-28T02:30:25.184067161Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:30:25.186201 containerd[1508]: time="2026-01-28T02:30:25.186110564Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:30:25.186941 containerd[1508]: time="2026-01-28T02:30:25.186888523Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 28 02:30:25.194423 containerd[1508]: time="2026-01-28T02:30:25.194350863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:30:25.198874 containerd[1508]: time="2026-01-28T02:30:25.198489426Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 658.649493ms" Jan 28 02:30:25.199273 containerd[1508]: time="2026-01-28T02:30:25.199195586Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 650.613728ms" Jan 28 02:30:25.199781 containerd[1508]: time="2026-01-28T02:30:25.199730631Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 656.601495ms" Jan 28 02:30:25.348103 kubelet[2317]: W0128 02:30:25.348017 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.34.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.34.254:6443: connect: connection refused Jan 28 02:30:25.348376 
kubelet[2317]: E0128 02:30:25.348124 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.34.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.34.254:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:30:25.457962 containerd[1508]: time="2026-01-28T02:30:25.457845026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:30:25.458401 containerd[1508]: time="2026-01-28T02:30:25.458267110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:30:25.458616 containerd[1508]: time="2026-01-28T02:30:25.458380383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:30:25.462504 containerd[1508]: time="2026-01-28T02:30:25.462307552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:30:25.466848 kubelet[2317]: E0128 02:30:25.466666 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.34.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hg60y.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.34.254:6443: connect: connection refused" interval="1.6s" Jan 28 02:30:25.472295 containerd[1508]: time="2026-01-28T02:30:25.471576270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:30:25.472295 containerd[1508]: time="2026-01-28T02:30:25.471631466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:30:25.472295 containerd[1508]: time="2026-01-28T02:30:25.471654703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:30:25.472295 containerd[1508]: time="2026-01-28T02:30:25.471751569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:30:25.472295 containerd[1508]: time="2026-01-28T02:30:25.470965966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:30:25.472295 containerd[1508]: time="2026-01-28T02:30:25.471037870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:30:25.472295 containerd[1508]: time="2026-01-28T02:30:25.471054048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:30:25.472295 containerd[1508]: time="2026-01-28T02:30:25.471244463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:30:25.530856 systemd[1]: Started cri-containerd-b3736480e57a5519951eebf39da63c912c4fe56b7446c0a6d808959decb46b6e.scope - libcontainer container b3736480e57a5519951eebf39da63c912c4fe56b7446c0a6d808959decb46b6e. 
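The "Failed to ensure lease exists, will retry" errors step their interval 200ms → 400ms → 800ms → 1.6s, a plain doubling backoff. Sketched below under the assumption of exponential growth with no jitter; the actual client-go backoff parameters may differ:

```python
# The doubling retry interval visible in the lease-controller errors above
# (200ms -> 400ms -> 800ms -> 1.6s). Assumed shape: exponential, no jitter.
def lease_backoff(base_ms: int = 200, factor: int = 2, steps: int = 4):
    interval = base_ms
    for _ in range(steps):
        yield interval
        interval *= factor

print([f"{ms} ms" for ms in lease_backoff()])  # ['200 ms', '400 ms', '800 ms', '1600 ms']
```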
Jan 28 02:30:25.541809 systemd[1]: Started cri-containerd-10b830a7319608ac3635d6a4a56902d8a28b85a41631bd80389ecf459f2fcbb6.scope - libcontainer container 10b830a7319608ac3635d6a4a56902d8a28b85a41631bd80389ecf459f2fcbb6. Jan 28 02:30:25.550424 systemd[1]: Started cri-containerd-8f65d92eb989f94b47d5a7ed0381a621e73b334078be253c6137184114309b7a.scope - libcontainer container 8f65d92eb989f94b47d5a7ed0381a621e73b334078be253c6137184114309b7a. Jan 28 02:30:25.642007 containerd[1508]: time="2026-01-28T02:30:25.641928204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-hg60y.gb1.brightbox.com,Uid:f39da272ef74ab07e7504d1c5decb104,Namespace:kube-system,Attempt:0,} returns sandbox id \"10b830a7319608ac3635d6a4a56902d8a28b85a41631bd80389ecf459f2fcbb6\"" Jan 28 02:30:25.645489 kubelet[2317]: W0128 02:30:25.645283 2317 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.34.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.34.254:6443: connect: connection refused Jan 28 02:30:25.645489 kubelet[2317]: E0128 02:30:25.645424 2317 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.34.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.34.254:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:30:25.654335 containerd[1508]: time="2026-01-28T02:30:25.653811905Z" level=info msg="CreateContainer within sandbox \"10b830a7319608ac3635d6a4a56902d8a28b85a41631bd80389ecf459f2fcbb6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 02:30:25.666262 containerd[1508]: time="2026-01-28T02:30:25.665958777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-hg60y.gb1.brightbox.com,Uid:2c3d7f2e0db73f844ff8135d7c5aba69,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f65d92eb989f94b47d5a7ed0381a621e73b334078be253c6137184114309b7a\"" Jan 28 02:30:25.680441 containerd[1508]: time="2026-01-28T02:30:25.678544858Z" level=info msg="CreateContainer within sandbox \"8f65d92eb989f94b47d5a7ed0381a621e73b334078be253c6137184114309b7a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 02:30:25.687113 kubelet[2317]: I0128 02:30:25.687060 2317 kubelet_node_status.go:75] "Attempting to register node" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:25.687883 kubelet[2317]: E0128 02:30:25.687832 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.34.254:6443/api/v1/nodes\": dial tcp 10.230.34.254:6443: connect: connection refused" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:25.692676 containerd[1508]: time="2026-01-28T02:30:25.692624766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-hg60y.gb1.brightbox.com,Uid:4868daa66997ef626fc1f96b02ad71b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3736480e57a5519951eebf39da63c912c4fe56b7446c0a6d808959decb46b6e\"" Jan 28 02:30:25.697071 containerd[1508]: time="2026-01-28T02:30:25.697031213Z" level=info msg="CreateContainer within sandbox \"b3736480e57a5519951eebf39da63c912c4fe56b7446c0a6d808959decb46b6e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 02:30:25.724011 containerd[1508]: time="2026-01-28T02:30:25.722950494Z" level=info msg="CreateContainer 
within sandbox \"10b830a7319608ac3635d6a4a56902d8a28b85a41631bd80389ecf459f2fcbb6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5bb26b5ea244a38bd76f366f6a4f57b77131f06f1221044d261bf09dd96c87d4\"" Jan 28 02:30:25.725190 containerd[1508]: time="2026-01-28T02:30:25.725026014Z" level=info msg="StartContainer for \"5bb26b5ea244a38bd76f366f6a4f57b77131f06f1221044d261bf09dd96c87d4\"" Jan 28 02:30:25.730346 containerd[1508]: time="2026-01-28T02:30:25.730305143Z" level=info msg="CreateContainer within sandbox \"8f65d92eb989f94b47d5a7ed0381a621e73b334078be253c6137184114309b7a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6361a03878e4727182b7dfceb208a65de2b1ab78e563a88fb1740b431436031b\"" Jan 28 02:30:25.737129 containerd[1508]: time="2026-01-28T02:30:25.736955634Z" level=info msg="StartContainer for \"6361a03878e4727182b7dfceb208a65de2b1ab78e563a88fb1740b431436031b\"" Jan 28 02:30:25.740922 containerd[1508]: time="2026-01-28T02:30:25.740806210Z" level=info msg="CreateContainer within sandbox \"b3736480e57a5519951eebf39da63c912c4fe56b7446c0a6d808959decb46b6e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c56c1bdf6e2a662e420673cf6dba02201ea762ce2179ff1fef54120d4e10847f\"" Jan 28 02:30:25.742290 containerd[1508]: time="2026-01-28T02:30:25.742134904Z" level=info msg="StartContainer for \"c56c1bdf6e2a662e420673cf6dba02201ea762ce2179ff1fef54120d4e10847f\"" Jan 28 02:30:25.790122 systemd[1]: Started cri-containerd-5bb26b5ea244a38bd76f366f6a4f57b77131f06f1221044d261bf09dd96c87d4.scope - libcontainer container 5bb26b5ea244a38bd76f366f6a4f57b77131f06f1221044d261bf09dd96c87d4. Jan 28 02:30:25.800416 systemd[1]: Started cri-containerd-6361a03878e4727182b7dfceb208a65de2b1ab78e563a88fb1740b431436031b.scope - libcontainer container 6361a03878e4727182b7dfceb208a65de2b1ab78e563a88fb1740b431436031b. Jan 28 02:30:25.810340 systemd[1]: Started cri-containerd-c56c1bdf6e2a662e420673cf6dba02201ea762ce2179ff1fef54120d4e10847f.scope - libcontainer container c56c1bdf6e2a662e420673cf6dba02201ea762ce2179ff1fef54120d4e10847f. 
Jan 28 02:30:25.915610 containerd[1508]: time="2026-01-28T02:30:25.915504802Z" level=info msg="StartContainer for \"5bb26b5ea244a38bd76f366f6a4f57b77131f06f1221044d261bf09dd96c87d4\" returns successfully" Jan 28 02:30:25.921473 containerd[1508]: time="2026-01-28T02:30:25.921280788Z" level=info msg="StartContainer for \"6361a03878e4727182b7dfceb208a65de2b1ab78e563a88fb1740b431436031b\" returns successfully" Jan 28 02:30:25.940930 containerd[1508]: time="2026-01-28T02:30:25.940869972Z" level=info msg="StartContainer for \"c56c1bdf6e2a662e420673cf6dba02201ea762ce2179ff1fef54120d4e10847f\" returns successfully" Jan 28 02:30:26.131365 kubelet[2317]: E0128 02:30:26.130798 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hg60y.gb1.brightbox.com\" not found" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:26.133115 kubelet[2317]: E0128 02:30:26.133083 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hg60y.gb1.brightbox.com\" not found" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:26.137948 kubelet[2317]: E0128 02:30:26.137919 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hg60y.gb1.brightbox.com\" not found" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:26.163369 kubelet[2317]: E0128 02:30:26.163323 2317 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.34.254:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.34.254:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:30:27.139616 kubelet[2317]: E0128 02:30:27.139566 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hg60y.gb1.brightbox.com\" not found" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:27.140109 kubelet[2317]: E0128 02:30:27.139983 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hg60y.gb1.brightbox.com\" not found" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:27.291275 kubelet[2317]: I0128 02:30:27.291234 2317 kubelet_node_status.go:75] "Attempting to register node" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:28.143684 kubelet[2317]: E0128 02:30:28.143544 2317 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hg60y.gb1.brightbox.com\" not found" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:29.737404 kubelet[2317]: E0128 02:30:29.737276 2317 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-hg60y.gb1.brightbox.com\" not found" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:29.810203 kubelet[2317]: I0128 02:30:29.810034 2317 kubelet_node_status.go:78] "Successfully registered node" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:29.810203 kubelet[2317]: E0128 02:30:29.810096 2317 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-hg60y.gb1.brightbox.com\": node \"srv-hg60y.gb1.brightbox.com\" not found" Jan 28 02:30:29.863337 kubelet[2317]: I0128 02:30:29.863274 2317 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-hg60y.gb1.brightbox.com" Jan 28 
02:30:29.876428 kubelet[2317]: E0128 02:30:29.876353 2317 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-hg60y.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:29.876428 kubelet[2317]: I0128 02:30:29.876395 2317 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:29.883184 kubelet[2317]: E0128 02:30:29.882362 2317 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-hg60y.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:29.883184 kubelet[2317]: I0128 02:30:29.882413 2317 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:29.885027 kubelet[2317]: E0128 02:30:29.884971 2317 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-hg60y.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:30.033855 kubelet[2317]: I0128 02:30:30.033640 2317 apiserver.go:52] "Watching apiserver" Jan 28 02:30:30.066434 kubelet[2317]: I0128 02:30:30.066363 2317 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 02:30:30.761879 kubelet[2317]: I0128 02:30:30.761825 2317 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:30.777605 kubelet[2317]: W0128 02:30:30.777567 2317 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 02:30:31.684099 kubelet[2317]: I0128 02:30:31.683615 2317 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:31.698307 kubelet[2317]: W0128 02:30:31.698248 2317 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 02:30:34.147397 kubelet[2317]: I0128 02:30:34.147263 2317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-hg60y.gb1.brightbox.com" podStartSLOduration=4.147219736 podStartE2EDuration="4.147219736s" podCreationTimestamp="2026-01-28 02:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:30:34.13474691 +0000 UTC m=+10.919828778" watchObservedRunningTime="2026-01-28 02:30:34.147219736 +0000 UTC m=+10.932301585" Jan 28 02:30:34.148255 kubelet[2317]: I0128 02:30:34.147453 2317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" podStartSLOduration=3.147444647 podStartE2EDuration="3.147444647s" podCreationTimestamp="2026-01-28 02:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:30:34.14687218 +0000 UTC m=+10.931954058" watchObservedRunningTime="2026-01-28 02:30:34.147444647 +0000 UTC m=+10.932526526" Jan 28 
02:30:37.796520 systemd[1]: Reloading requested from client PID 2597 ('systemctl') (unit session-11.scope)... Jan 28 02:30:37.796567 systemd[1]: Reloading... Jan 28 02:30:37.954204 zram_generator::config[2645]: No configuration found. Jan 28 02:30:38.117990 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 02:30:38.254180 systemd[1]: Reloading finished in 456 ms. Jan 28 02:30:38.314673 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:30:38.320487 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 02:30:38.320880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:30:38.320977 systemd[1]: kubelet.service: Consumed 1.537s CPU time, 128.8M memory peak, 0B memory swap peak. Jan 28 02:30:38.336640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:30:38.577741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:30:38.589688 (kubelet)[2700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 02:30:38.724364 kubelet[2700]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 02:30:38.724364 kubelet[2700]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 02:30:38.724364 kubelet[2700]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 02:30:38.725132 kubelet[2700]: I0128 02:30:38.724538 2700 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 02:30:38.740171 kubelet[2700]: I0128 02:30:38.738839 2700 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 02:30:38.740171 kubelet[2700]: I0128 02:30:38.738875 2700 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 02:30:38.744237 kubelet[2700]: I0128 02:30:38.744211 2700 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 02:30:38.747976 kubelet[2700]: I0128 02:30:38.747949 2700 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 02:30:38.754924 kubelet[2700]: I0128 02:30:38.754889 2700 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 02:30:38.761444 kubelet[2700]: E0128 02:30:38.761372 2700 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 02:30:38.761444 kubelet[2700]: I0128 02:30:38.761444 2700 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jan 28 02:30:38.767913 kubelet[2700]: I0128 02:30:38.767856 2700 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 28 02:30:38.768279 kubelet[2700]: I0128 02:30:38.768227 2700 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 02:30:38.768552 kubelet[2700]: I0128 02:30:38.768282 2700 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-hg60y.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 02:30:38.768921 kubelet[2700]: I0128 02:30:38.768559 2700 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 02:30:38.768921 kubelet[2700]: I0128 02:30:38.768579 2700 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 02:30:38.768921 kubelet[2700]: I0128 02:30:38.768697 2700 state_mem.go:36] "Initialized new in-memory state store" Jan 28 02:30:38.769062 kubelet[2700]: I0128 02:30:38.768967 2700 kubelet.go:446] "Attempting to sync node with API server" Jan 28 02:30:38.769863 kubelet[2700]: I0128 02:30:38.769693 2700 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 02:30:38.769863 kubelet[2700]: I0128 02:30:38.769742 2700 kubelet.go:352] "Adding apiserver pod source" Jan 28 02:30:38.769863 kubelet[2700]: I0128 02:30:38.769771 2700 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 02:30:38.772166 kubelet[2700]: I0128 02:30:38.772103 2700 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 02:30:38.772840 kubelet[2700]: I0128 02:30:38.772697 2700 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 02:30:38.785328 kubelet[2700]: I0128 02:30:38.785293 2700 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 02:30:38.785434 kubelet[2700]: I0128 02:30:38.785359 2700 server.go:1287] "Started kubelet" Jan 28 02:30:38.798020 kubelet[2700]: I0128 02:30:38.797771 
2700 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 02:30:38.812062 kubelet[2700]: I0128 02:30:38.810359 2700 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 02:30:38.814853 kubelet[2700]: I0128 02:30:38.814822 2700 server.go:479] "Adding debug handlers to kubelet server" Jan 28 02:30:38.818528 kubelet[2700]: I0128 02:30:38.818441 2700 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 02:30:38.818931 kubelet[2700]: I0128 02:30:38.818896 2700 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 02:30:38.819445 kubelet[2700]: I0128 02:30:38.819406 2700 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 02:30:38.830180 kubelet[2700]: E0128 02:30:38.827297 2700 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 02:30:38.830180 kubelet[2700]: I0128 02:30:38.829409 2700 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 02:30:38.830860 kubelet[2700]: E0128 02:30:38.830802 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-hg60y.gb1.brightbox.com\" not found" Jan 28 02:30:38.831588 kubelet[2700]: I0128 02:30:38.831480 2700 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 02:30:38.832176 kubelet[2700]: I0128 02:30:38.831728 2700 reconciler.go:26] "Reconciler: start to sync state" Jan 28 02:30:38.847179 kubelet[2700]: I0128 02:30:38.845395 2700 factory.go:221] Registration of the containerd container factory successfully Jan 28 02:30:38.847179 kubelet[2700]: I0128 02:30:38.845430 2700 factory.go:221] Registration of the systemd container factory successfully Jan 28 02:30:38.847179 kubelet[2700]: I0128 02:30:38.845650 2700 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 02:30:38.848444 kubelet[2700]: I0128 02:30:38.848398 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 02:30:38.852696 kubelet[2700]: I0128 02:30:38.852668 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 02:30:38.852781 kubelet[2700]: I0128 02:30:38.852726 2700 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 02:30:38.852781 kubelet[2700]: I0128 02:30:38.852763 2700 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 02:30:38.852781 kubelet[2700]: I0128 02:30:38.852779 2700 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 02:30:38.852943 kubelet[2700]: E0128 02:30:38.852849 2700 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 02:30:38.938678 kubelet[2700]: I0128 02:30:38.938600 2700 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 02:30:38.938678 kubelet[2700]: I0128 02:30:38.938649 2700 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 02:30:38.938678 kubelet[2700]: I0128 02:30:38.938692 2700 state_mem.go:36] "Initialized new in-memory state store" Jan 28 02:30:38.939024 kubelet[2700]: I0128 02:30:38.938988 2700 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 02:30:38.939113 kubelet[2700]: I0128 02:30:38.939024 2700 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 02:30:38.939113 kubelet[2700]: I0128 02:30:38.939073 2700 policy_none.go:49] "None policy: Start" Jan 28 02:30:38.939113 kubelet[2700]: I0128 02:30:38.939107 2700 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 02:30:38.939642 kubelet[2700]: I0128 02:30:38.939140 2700 state_mem.go:35] "Initializing new in-memory state store" Jan 28 02:30:38.940942 kubelet[2700]: I0128 02:30:38.940901 2700 state_mem.go:75] "Updated machine memory state" Jan 28 02:30:38.951997 kubelet[2700]: I0128 02:30:38.951965 2700 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 02:30:38.953196 kubelet[2700]: E0128 02:30:38.953132 2700 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 02:30:38.953650 kubelet[2700]: I0128 02:30:38.953520 2700 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 02:30:38.953960 kubelet[2700]: I0128 02:30:38.953889 2700 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 02:30:38.954448 kubelet[2700]: I0128 02:30:38.954427 2700 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 02:30:38.962317 kubelet[2700]: E0128 02:30:38.962281 2700 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 02:30:39.084606 kubelet[2700]: I0128 02:30:39.082689 2700 kubelet_node_status.go:75] "Attempting to register node" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.094774 kubelet[2700]: I0128 02:30:39.094741 2700 kubelet_node_status.go:124] "Node was previously registered" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.095062 kubelet[2700]: I0128 02:30:39.095009 2700 kubelet_node_status.go:78] "Successfully registered node" node="srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.156194 kubelet[2700]: I0128 02:30:39.155850 2700 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.159173 kubelet[2700]: I0128 02:30:39.156864 2700 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.159173 kubelet[2700]: I0128 02:30:39.157298 2700 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.170161 kubelet[2700]: W0128 02:30:39.170113 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 02:30:39.170436 kubelet[2700]: E0128 02:30:39.170410 2700 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-hg60y.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.172814 kubelet[2700]: W0128 02:30:39.172792 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 02:30:39.175506 kubelet[2700]: W0128 02:30:39.175473 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 02:30:39.176189 kubelet[2700]: E0128 02:30:39.175794 2700 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-hg60y.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.235385 kubelet[2700]: I0128 02:30:39.234944 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c3d7f2e0db73f844ff8135d7c5aba69-usr-share-ca-certificates\") pod \"kube-apiserver-srv-hg60y.gb1.brightbox.com\" (UID: \"2c3d7f2e0db73f844ff8135d7c5aba69\") " pod="kube-system/kube-apiserver-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.235385 kubelet[2700]: I0128 02:30:39.235007 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f39da272ef74ab07e7504d1c5decb104-ca-certs\") pod \"kube-controller-manager-srv-hg60y.gb1.brightbox.com\" (UID: \"f39da272ef74ab07e7504d1c5decb104\") " pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.235385 kubelet[2700]: I0128 02:30:39.235052 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f39da272ef74ab07e7504d1c5decb104-flexvolume-dir\") pod \"kube-controller-manager-srv-hg60y.gb1.brightbox.com\" (UID: \"f39da272ef74ab07e7504d1c5decb104\") " 
pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.235385 kubelet[2700]: I0128 02:30:39.235083 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f39da272ef74ab07e7504d1c5decb104-kubeconfig\") pod \"kube-controller-manager-srv-hg60y.gb1.brightbox.com\" (UID: \"f39da272ef74ab07e7504d1c5decb104\") " pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.235385 kubelet[2700]: I0128 02:30:39.235114 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4868daa66997ef626fc1f96b02ad71b0-kubeconfig\") pod \"kube-scheduler-srv-hg60y.gb1.brightbox.com\" (UID: \"4868daa66997ef626fc1f96b02ad71b0\") " pod="kube-system/kube-scheduler-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.235794 kubelet[2700]: I0128 02:30:39.235141 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c3d7f2e0db73f844ff8135d7c5aba69-ca-certs\") pod \"kube-apiserver-srv-hg60y.gb1.brightbox.com\" (UID: \"2c3d7f2e0db73f844ff8135d7c5aba69\") " pod="kube-system/kube-apiserver-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.235794 kubelet[2700]: I0128 02:30:39.235186 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c3d7f2e0db73f844ff8135d7c5aba69-k8s-certs\") pod \"kube-apiserver-srv-hg60y.gb1.brightbox.com\" (UID: \"2c3d7f2e0db73f844ff8135d7c5aba69\") " pod="kube-system/kube-apiserver-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.235794 kubelet[2700]: I0128 02:30:39.235216 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f39da272ef74ab07e7504d1c5decb104-k8s-certs\") pod \"kube-controller-manager-srv-hg60y.gb1.brightbox.com\" (UID: \"f39da272ef74ab07e7504d1c5decb104\") " pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.235794 kubelet[2700]: I0128 02:30:39.235246 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f39da272ef74ab07e7504d1c5decb104-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-hg60y.gb1.brightbox.com\" (UID: \"f39da272ef74ab07e7504d1c5decb104\") " pod="kube-system/kube-controller-manager-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.783521 kubelet[2700]: I0128 02:30:39.782452 2700 apiserver.go:52] "Watching apiserver" Jan 28 02:30:39.832519 kubelet[2700]: I0128 02:30:39.832418 2700 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 02:30:39.904948 kubelet[2700]: I0128 02:30:39.904811 2700 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.919723 kubelet[2700]: W0128 02:30:39.919641 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 02:30:39.919903 kubelet[2700]: E0128 02:30:39.919736 2700 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-hg60y.gb1.brightbox.com\" already exists" 
pod="kube-system/kube-apiserver-srv-hg60y.gb1.brightbox.com" Jan 28 02:30:39.956536 kubelet[2700]: I0128 02:30:39.956289 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-hg60y.gb1.brightbox.com" podStartSLOduration=0.95626305 podStartE2EDuration="956.26305ms" podCreationTimestamp="2026-01-28 02:30:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:30:39.931932946 +0000 UTC m=+1.278480503" watchObservedRunningTime="2026-01-28 02:30:39.95626305 +0000 UTC m=+1.302810572" Jan 28 02:30:44.382007 kubelet[2700]: I0128 02:30:44.380782 2700 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 02:30:44.383977 containerd[1508]: time="2026-01-28T02:30:44.381739965Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 02:30:44.385499 kubelet[2700]: I0128 02:30:44.383525 2700 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 02:30:48.653305 systemd[1]: Created slice kubepods-besteffort-pod9f9233e0_64ae_4366_b09d_288acb5ea5db.slice - libcontainer container kubepods-besteffort-pod9f9233e0_64ae_4366_b09d_288acb5ea5db.slice. Jan 28 02:30:48.691102 kubelet[2700]: I0128 02:30:48.691034 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9f9233e0-64ae-4366-b09d-288acb5ea5db-kube-proxy\") pod \"kube-proxy-28zpw\" (UID: \"9f9233e0-64ae-4366-b09d-288acb5ea5db\") " pod="kube-system/kube-proxy-28zpw" Jan 28 02:30:48.691102 kubelet[2700]: I0128 02:30:48.691109 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f9233e0-64ae-4366-b09d-288acb5ea5db-xtables-lock\") pod \"kube-proxy-28zpw\" (UID: \"9f9233e0-64ae-4366-b09d-288acb5ea5db\") " pod="kube-system/kube-proxy-28zpw" Jan 28 02:30:48.695537 kubelet[2700]: I0128 02:30:48.692774 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f9233e0-64ae-4366-b09d-288acb5ea5db-lib-modules\") pod \"kube-proxy-28zpw\" (UID: \"9f9233e0-64ae-4366-b09d-288acb5ea5db\") " pod="kube-system/kube-proxy-28zpw" Jan 28 02:30:48.695537 kubelet[2700]: I0128 02:30:48.692846 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs5nq\" (UniqueName: \"kubernetes.io/projected/9f9233e0-64ae-4366-b09d-288acb5ea5db-kube-api-access-rs5nq\") pod \"kube-proxy-28zpw\" (UID: \"9f9233e0-64ae-4366-b09d-288acb5ea5db\") " pod="kube-system/kube-proxy-28zpw" Jan 28 02:30:48.712452 systemd[1]: Created slice kubepods-besteffort-podfa520ba1_2c7f_48f0_9489_0389d0b0bf52.slice - libcontainer container kubepods-besteffort-podfa520ba1_2c7f_48f0_9489_0389d0b0bf52.slice. 
Jan 28 02:30:48.895289 kubelet[2700]: I0128 02:30:48.895207 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa520ba1-2c7f-48f0-9489-0389d0b0bf52-var-lib-calico\") pod \"tigera-operator-7dcd859c48-r7wb4\" (UID: \"fa520ba1-2c7f-48f0-9489-0389d0b0bf52\") " pod="tigera-operator/tigera-operator-7dcd859c48-r7wb4" Jan 28 02:30:48.895750 kubelet[2700]: I0128 02:30:48.895680 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l6zw\" (UniqueName: \"kubernetes.io/projected/fa520ba1-2c7f-48f0-9489-0389d0b0bf52-kube-api-access-7l6zw\") pod \"tigera-operator-7dcd859c48-r7wb4\" (UID: \"fa520ba1-2c7f-48f0-9489-0389d0b0bf52\") " pod="tigera-operator/tigera-operator-7dcd859c48-r7wb4" Jan 28 02:30:48.967138 containerd[1508]: time="2026-01-28T02:30:48.966876904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-28zpw,Uid:9f9233e0-64ae-4366-b09d-288acb5ea5db,Namespace:kube-system,Attempt:0,}" Jan 28 02:30:49.051738 containerd[1508]: time="2026-01-28T02:30:49.051127490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:30:49.051738 containerd[1508]: time="2026-01-28T02:30:49.051378834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:30:49.051738 containerd[1508]: time="2026-01-28T02:30:49.051427049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:30:49.052075 containerd[1508]: time="2026-01-28T02:30:49.051639870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:30:49.092392 systemd[1]: Started cri-containerd-bd25c9a6e5f4b0996d4287576bb865d709bbbd0df577901fb016dc69642180bd.scope - libcontainer container bd25c9a6e5f4b0996d4287576bb865d709bbbd0df577901fb016dc69642180bd. Jan 28 02:30:49.134948 containerd[1508]: time="2026-01-28T02:30:49.134870023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-28zpw,Uid:9f9233e0-64ae-4366-b09d-288acb5ea5db,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd25c9a6e5f4b0996d4287576bb865d709bbbd0df577901fb016dc69642180bd\"" Jan 28 02:30:49.142661 containerd[1508]: time="2026-01-28T02:30:49.142579647Z" level=info msg="CreateContainer within sandbox \"bd25c9a6e5f4b0996d4287576bb865d709bbbd0df577901fb016dc69642180bd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 02:30:49.166068 containerd[1508]: time="2026-01-28T02:30:49.165898199Z" level=info msg="CreateContainer within sandbox \"bd25c9a6e5f4b0996d4287576bb865d709bbbd0df577901fb016dc69642180bd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3a5cd578afa4ac7fe34af8b2c69d91878a4f8db10df4b35e58c59f75d333a322\"" Jan 28 02:30:49.166996 containerd[1508]: time="2026-01-28T02:30:49.166866898Z" level=info msg="StartContainer for \"3a5cd578afa4ac7fe34af8b2c69d91878a4f8db10df4b35e58c59f75d333a322\"" Jan 28 02:30:49.205360 systemd[1]: Started cri-containerd-3a5cd578afa4ac7fe34af8b2c69d91878a4f8db10df4b35e58c59f75d333a322.scope - libcontainer container 3a5cd578afa4ac7fe34af8b2c69d91878a4f8db10df4b35e58c59f75d333a322. 
Jan 28 02:30:49.257912 containerd[1508]: time="2026-01-28T02:30:49.257649588Z" level=info msg="StartContainer for \"3a5cd578afa4ac7fe34af8b2c69d91878a4f8db10df4b35e58c59f75d333a322\" returns successfully" Jan 28 02:30:49.322244 containerd[1508]: time="2026-01-28T02:30:49.321131925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r7wb4,Uid:fa520ba1-2c7f-48f0-9489-0389d0b0bf52,Namespace:tigera-operator,Attempt:0,}" Jan 28 02:30:49.395660 containerd[1508]: time="2026-01-28T02:30:49.394905736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:30:49.395660 containerd[1508]: time="2026-01-28T02:30:49.395025924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:30:49.395660 containerd[1508]: time="2026-01-28T02:30:49.395114752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:30:49.395660 containerd[1508]: time="2026-01-28T02:30:49.395482106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:30:49.432380 systemd[1]: Started cri-containerd-83020c9089a0d27b3d2d073dff27081f1f1f03471a16770f67279b14bf061cfa.scope - libcontainer container 83020c9089a0d27b3d2d073dff27081f1f1f03471a16770f67279b14bf061cfa. Jan 28 02:30:49.523011 containerd[1508]: time="2026-01-28T02:30:49.522855931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r7wb4,Uid:fa520ba1-2c7f-48f0-9489-0389d0b0bf52,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"83020c9089a0d27b3d2d073dff27081f1f1f03471a16770f67279b14bf061cfa\"" Jan 28 02:30:49.530562 containerd[1508]: time="2026-01-28T02:30:49.530521477Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 02:30:49.964141 kubelet[2700]: I0128 02:30:49.964044 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-28zpw" podStartSLOduration=4.963996883 podStartE2EDuration="4.963996883s" podCreationTimestamp="2026-01-28 02:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:30:49.962237511 +0000 UTC m=+11.308785062" watchObservedRunningTime="2026-01-28 02:30:49.963996883 +0000 UTC m=+11.310544413" Jan 28 02:30:51.988772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651112316.mount: Deactivated successfully. 
Jan 28 02:30:53.011442 containerd[1508]: time="2026-01-28T02:30:53.011323597Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:53.013847 containerd[1508]: time="2026-01-28T02:30:53.013749526Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 28 02:30:53.015137 containerd[1508]: time="2026-01-28T02:30:53.015050216Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:53.025189 containerd[1508]: time="2026-01-28T02:30:53.024796783Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.494204101s" Jan 28 02:30:53.025189 containerd[1508]: time="2026-01-28T02:30:53.024868530Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 02:30:53.026094 containerd[1508]: time="2026-01-28T02:30:53.026016850Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:30:53.030220 containerd[1508]: time="2026-01-28T02:30:53.030182196Z" level=info msg="CreateContainer within sandbox \"83020c9089a0d27b3d2d073dff27081f1f1f03471a16770f67279b14bf061cfa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 02:30:53.048557 containerd[1508]: time="2026-01-28T02:30:53.044418582Z" level=info msg="CreateContainer within sandbox \"83020c9089a0d27b3d2d073dff27081f1f1f03471a16770f67279b14bf061cfa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cbe4de452ee45c6e9fa8106ac2d0d95a124cf466edbc8937f4602af70433c38f\"" Jan 28 02:30:53.048557 containerd[1508]: time="2026-01-28T02:30:53.044920413Z" level=info msg="StartContainer for \"cbe4de452ee45c6e9fa8106ac2d0d95a124cf466edbc8937f4602af70433c38f\"" Jan 28 02:30:53.109445 systemd[1]: Started cri-containerd-cbe4de452ee45c6e9fa8106ac2d0d95a124cf466edbc8937f4602af70433c38f.scope - libcontainer container cbe4de452ee45c6e9fa8106ac2d0d95a124cf466edbc8937f4602af70433c38f. 
Jan 28 02:30:53.146187 containerd[1508]: time="2026-01-28T02:30:53.146045730Z" level=info msg="StartContainer for \"cbe4de452ee45c6e9fa8106ac2d0d95a124cf466edbc8937f4602af70433c38f\" returns successfully" Jan 28 02:30:53.971903 kubelet[2700]: I0128 02:30:53.971766 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-r7wb4" podStartSLOduration=5.472824659 podStartE2EDuration="8.97171263s" podCreationTimestamp="2026-01-28 02:30:45 +0000 UTC" firstStartedPulling="2026-01-28 02:30:49.528784904 +0000 UTC m=+10.875332426" lastFinishedPulling="2026-01-28 02:30:53.02767288 +0000 UTC m=+14.374220397" observedRunningTime="2026-01-28 02:30:53.970457291 +0000 UTC m=+15.317004821" watchObservedRunningTime="2026-01-28 02:30:53.97171263 +0000 UTC m=+15.318260154" Jan 28 02:30:59.758027 sudo[1763]: pam_unix(sudo:session): session closed for user root Jan 28 02:30:59.865851 sshd[1760]: pam_unix(sshd:session): session closed for user core Jan 28 02:30:59.875953 systemd[1]: sshd@8-10.230.34.254:22-68.220.241.50:41966.service: Deactivated successfully. Jan 28 02:30:59.881665 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 02:30:59.882252 systemd[1]: session-11.scope: Consumed 6.577s CPU time, 142.5M memory peak, 0B memory swap peak. Jan 28 02:30:59.884619 systemd-logind[1488]: Session 11 logged out. Waiting for processes to exit. Jan 28 02:30:59.887867 systemd-logind[1488]: Removed session 11. Jan 28 02:31:14.426971 systemd[1]: Created slice kubepods-besteffort-pod2e65a411_1cbc_46ba_86d1_c764af879c0a.slice - libcontainer container kubepods-besteffort-pod2e65a411_1cbc_46ba_86d1_c764af879c0a.slice. Jan 28 02:31:14.467539 kubelet[2700]: I0128 02:31:14.467466 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2e65a411-1cbc-46ba-86d1-c764af879c0a-typha-certs\") pod \"calico-typha-69f7bb8b5-qc6qd\" (UID: \"2e65a411-1cbc-46ba-86d1-c764af879c0a\") " pod="calico-system/calico-typha-69f7bb8b5-qc6qd" Jan 28 02:31:14.468714 kubelet[2700]: I0128 02:31:14.468554 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lknbs\" (UniqueName: \"kubernetes.io/projected/2e65a411-1cbc-46ba-86d1-c764af879c0a-kube-api-access-lknbs\") pod \"calico-typha-69f7bb8b5-qc6qd\" (UID: \"2e65a411-1cbc-46ba-86d1-c764af879c0a\") " pod="calico-system/calico-typha-69f7bb8b5-qc6qd" Jan 28 02:31:14.468714 kubelet[2700]: I0128 02:31:14.468649 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e65a411-1cbc-46ba-86d1-c764af879c0a-tigera-ca-bundle\") pod \"calico-typha-69f7bb8b5-qc6qd\" (UID: \"2e65a411-1cbc-46ba-86d1-c764af879c0a\") " pod="calico-system/calico-typha-69f7bb8b5-qc6qd" Jan 28 02:31:14.699258 systemd[1]: Created slice kubepods-besteffort-pod16e97c7a_7fae_44b9_b242_5844621f9c22.slice - libcontainer container kubepods-besteffort-pod16e97c7a_7fae_44b9_b242_5844621f9c22.slice. 
Jan 28 02:31:14.741996 containerd[1508]: time="2026-01-28T02:31:14.741733139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69f7bb8b5-qc6qd,Uid:2e65a411-1cbc-46ba-86d1-c764af879c0a,Namespace:calico-system,Attempt:0,}" Jan 28 02:31:14.770567 kubelet[2700]: I0128 02:31:14.770506 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/16e97c7a-7fae-44b9-b242-5844621f9c22-cni-log-dir\") pod \"calico-node-47dzk\" (UID: \"16e97c7a-7fae-44b9-b242-5844621f9c22\") " pod="calico-system/calico-node-47dzk" Jan 28 02:31:14.770567 kubelet[2700]: I0128 02:31:14.770559 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16e97c7a-7fae-44b9-b242-5844621f9c22-tigera-ca-bundle\") pod \"calico-node-47dzk\" (UID: \"16e97c7a-7fae-44b9-b242-5844621f9c22\") " pod="calico-system/calico-node-47dzk" Jan 28 02:31:14.770772 kubelet[2700]: I0128 02:31:14.770588 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/16e97c7a-7fae-44b9-b242-5844621f9c22-var-lib-calico\") pod \"calico-node-47dzk\" (UID: \"16e97c7a-7fae-44b9-b242-5844621f9c22\") " pod="calico-system/calico-node-47dzk" Jan 28 02:31:14.770772 kubelet[2700]: I0128 02:31:14.770615 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldswb\" (UniqueName: \"kubernetes.io/projected/16e97c7a-7fae-44b9-b242-5844621f9c22-kube-api-access-ldswb\") pod \"calico-node-47dzk\" (UID: \"16e97c7a-7fae-44b9-b242-5844621f9c22\") " pod="calico-system/calico-node-47dzk" Jan 28 02:31:14.770772 kubelet[2700]: I0128 02:31:14.770648 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/16e97c7a-7fae-44b9-b242-5844621f9c22-flexvol-driver-host\") pod \"calico-node-47dzk\" (UID: \"16e97c7a-7fae-44b9-b242-5844621f9c22\") " pod="calico-system/calico-node-47dzk" Jan 28 02:31:14.770772 kubelet[2700]: I0128 02:31:14.770677 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16e97c7a-7fae-44b9-b242-5844621f9c22-lib-modules\") pod \"calico-node-47dzk\" (UID: \"16e97c7a-7fae-44b9-b242-5844621f9c22\") " pod="calico-system/calico-node-47dzk" Jan 28 02:31:14.770772 kubelet[2700]: I0128 02:31:14.770703 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16e97c7a-7fae-44b9-b242-5844621f9c22-xtables-lock\") pod \"calico-node-47dzk\" (UID: \"16e97c7a-7fae-44b9-b242-5844621f9c22\") " pod="calico-system/calico-node-47dzk" Jan 28 02:31:14.771037 kubelet[2700]: I0128 02:31:14.770733 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/16e97c7a-7fae-44b9-b242-5844621f9c22-cni-bin-dir\") pod \"calico-node-47dzk\" (UID: \"16e97c7a-7fae-44b9-b242-5844621f9c22\") " pod="calico-system/calico-node-47dzk" Jan 28 02:31:14.771037 kubelet[2700]: I0128 02:31:14.770757 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/16e97c7a-7fae-44b9-b242-5844621f9c22-var-run-calico\") pod \"calico-node-47dzk\" (UID: \"16e97c7a-7fae-44b9-b242-5844621f9c22\") " pod="calico-system/calico-node-47dzk" Jan 28 02:31:14.771037 kubelet[2700]: I0128 02:31:14.770784 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/16e97c7a-7fae-44b9-b242-5844621f9c22-cni-net-dir\") pod \"calico-node-47dzk\" (UID: \"16e97c7a-7fae-44b9-b242-5844621f9c22\") " pod="calico-system/calico-node-47dzk" Jan 28 02:31:14.771037 kubelet[2700]: I0128 02:31:14.770808 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/16e97c7a-7fae-44b9-b242-5844621f9c22-node-certs\") pod \"calico-node-47dzk\" (UID: \"16e97c7a-7fae-44b9-b242-5844621f9c22\") " pod="calico-system/calico-node-47dzk" Jan 28 02:31:14.771037 kubelet[2700]: I0128 02:31:14.770834 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/16e97c7a-7fae-44b9-b242-5844621f9c22-policysync\") pod \"calico-node-47dzk\" (UID: \"16e97c7a-7fae-44b9-b242-5844621f9c22\") " pod="calico-system/calico-node-47dzk" Jan 28 02:31:14.822246 containerd[1508]: time="2026-01-28T02:31:14.820269980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:31:14.822712 containerd[1508]: time="2026-01-28T02:31:14.822288769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:31:14.822712 containerd[1508]: time="2026-01-28T02:31:14.822340862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:14.822712 containerd[1508]: time="2026-01-28T02:31:14.822537110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:14.896459 kubelet[2700]: E0128 02:31:14.896100 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:14.896459 kubelet[2700]: W0128 02:31:14.896291 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:14.898255 kubelet[2700]: E0128 02:31:14.898175 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:14.938919 kubelet[2700]: E0128 02:31:14.938886 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:14.939117 kubelet[2700]: W0128 02:31:14.939092 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:14.939330 kubelet[2700]: E0128 02:31:14.939256 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:31:14.947136 systemd[1]: Started cri-containerd-127a97bb8b5f64bd00b46183a0c52a3fdf967f5b2ec06887025ac46c00a97df6.scope - libcontainer container 127a97bb8b5f64bd00b46183a0c52a3fdf967f5b2ec06887025ac46c00a97df6. Jan 28 02:31:15.009597 containerd[1508]: time="2026-01-28T02:31:15.009300903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-47dzk,Uid:16e97c7a-7fae-44b9-b242-5844621f9c22,Namespace:calico-system,Attempt:0,}" Jan 28 02:31:15.012429 kubelet[2700]: E0128 02:31:15.012078 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:31:15.070966 kubelet[2700]: E0128 02:31:15.070544 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:15.070966 kubelet[2700]: W0128 02:31:15.070578 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:15.070966 kubelet[2700]: E0128 02:31:15.070628 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:15.072247 kubelet[2700]: E0128 02:31:15.071752 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:15.072247 kubelet[2700]: W0128 02:31:15.071772 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:15.072247 kubelet[2700]: E0128 02:31:15.071793 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:15.073258 kubelet[2700]: E0128 02:31:15.072766 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:15.073258 kubelet[2700]: W0128 02:31:15.072795 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:15.073258 kubelet[2700]: E0128 02:31:15.072813 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:15.074678 kubelet[2700]: E0128 02:31:15.074333 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:15.074678 kubelet[2700]: W0128 02:31:15.074353 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:15.074678 kubelet[2700]: E0128 02:31:15.074370 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:31:15.075595 kubelet[2700]: E0128 02:31:15.075348 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:15.075595 kubelet[2700]: W0128 02:31:15.075367 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:15.075595 kubelet[2700]: E0128 02:31:15.075385 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:15.076669 kubelet[2700]: E0128 02:31:15.076358 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:15.076669 kubelet[2700]: W0128 02:31:15.076387 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:15.076669 kubelet[2700]: E0128 02:31:15.076405 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:15.077793 kubelet[2700]: E0128 02:31:15.077507 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:15.077793 kubelet[2700]: W0128 02:31:15.077650 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:15.077793 kubelet[2700]: E0128 02:31:15.077671 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:15.078768 kubelet[2700]: E0128 02:31:15.078388 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:15.078768 kubelet[2700]: W0128 02:31:15.078515 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:15.078768 kubelet[2700]: E0128 02:31:15.078535 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:15.079706 kubelet[2700]: E0128 02:31:15.079302 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:15.079706 kubelet[2700]: W0128 02:31:15.079346 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:15.079706 kubelet[2700]: E0128 02:31:15.079364 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:31:15.080398 kubelet[2700]: E0128 02:31:15.080030 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:15.080398 kubelet[2700]: W0128 02:31:15.080049 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:15.080398 kubelet[2700]: E0128 02:31:15.080066 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:15.081394 kubelet[2700]: E0128 02:31:15.081226 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:15.081394 kubelet[2700]: W0128 02:31:15.081249 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:15.081394 kubelet[2700]: E0128 02:31:15.081275 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:15.083307 kubelet[2700]: E0128 02:31:15.082370 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:15.083307 kubelet[2700]: W0128 02:31:15.082389 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:15.083307 kubelet[2700]: E0128 02:31:15.082409 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:15.083307 kubelet[2700]: E0128 02:31:15.082833 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:15.083307 kubelet[2700]: W0128 02:31:15.082850 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:15.083307 kubelet[2700]: E0128 02:31:15.082868 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 28 02:31:15.083307 kubelet[2700]: I0128 02:31:15.082901 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5-socket-dir\") pod \"csi-node-driver-9vjdx\" (UID: \"7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5\") " pod="calico-system/csi-node-driver-9vjdx"
[... repeated FlexVolume probe failures omitted ...]
Jan 28 02:31:15.085175 kubelet[2700]: I0128 02:31:15.084342 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5-kubelet-dir\") pod \"csi-node-driver-9vjdx\" (UID: \"7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5\") " pod="calico-system/csi-node-driver-9vjdx"
Jan 28 02:31:15.085541 containerd[1508]: time="2026-01-28T02:31:15.084033806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 02:31:15.085541 containerd[1508]: time="2026-01-28T02:31:15.084553958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 02:31:15.085541 containerd[1508]: time="2026-01-28T02:31:15.084581410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 02:31:15.088989 containerd[1508]: time="2026-01-28T02:31:15.086344079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
[... repeated FlexVolume probe failures omitted ...]
Jan 28 02:31:15.091176 kubelet[2700]: I0128 02:31:15.090609 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5-registration-dir\") pod \"csi-node-driver-9vjdx\" (UID: \"7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5\") " pod="calico-system/csi-node-driver-9vjdx"
[... repeated FlexVolume probe failures omitted ...]
Jan 28 02:31:15.141423 systemd[1]: Started cri-containerd-210e3d72ae21255818cceda12f15fb8e6323d1470b02662b16627aad01a251ba.scope - libcontainer container 210e3d72ae21255818cceda12f15fb8e6323d1470b02662b16627aad01a251ba.
[... repeated FlexVolume probe failures omitted ...]
Jan 28 02:31:15.211775 kubelet[2700]: I0128 02:31:15.211361 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5-varrun\") pod \"csi-node-driver-9vjdx\" (UID: \"7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5\") " pod="calico-system/csi-node-driver-9vjdx"
[... repeated FlexVolume probe failures omitted ...]
Jan 28 02:31:15.218772 kubelet[2700]: I0128 02:31:15.218516 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66wbc\" (UniqueName: \"kubernetes.io/projected/7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5-kube-api-access-66wbc\") pod \"csi-node-driver-9vjdx\" (UID: \"7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5\") " pod="calico-system/csi-node-driver-9vjdx"
[... repeated FlexVolume probe failures omitted ...]
Jan 28 02:31:15.238464 containerd[1508]: time="2026-01-28T02:31:15.238300899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69f7bb8b5-qc6qd,Uid:2e65a411-1cbc-46ba-86d1-c764af879c0a,Namespace:calico-system,Attempt:0,} returns sandbox id \"127a97bb8b5f64bd00b46183a0c52a3fdf967f5b2ec06887025ac46c00a97df6\""
Jan 28 02:31:15.243338 containerd[1508]: time="2026-01-28T02:31:15.243027561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-47dzk,Uid:16e97c7a-7fae-44b9-b242-5844621f9c22,Namespace:calico-system,Attempt:0,} returns sandbox id \"210e3d72ae21255818cceda12f15fb8e6323d1470b02662b16627aad01a251ba\""
Jan 28 02:31:15.247915 containerd[1508]: time="2026-01-28T02:31:15.247840909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
[... repeated FlexVolume probe failures omitted ...]
Jan 28 02:31:16.857889 kubelet[2700]: E0128 02:31:16.857669 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5"
Jan 28 02:31:16.988344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1297129930.mount: Deactivated successfully.
Jan 28 02:31:18.727458 containerd[1508]: time="2026-01-28T02:31:18.727386301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 02:31:18.729102 containerd[1508]: time="2026-01-28T02:31:18.729044341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 28 02:31:18.730411 containerd[1508]: time="2026-01-28T02:31:18.730368719Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 02:31:18.736249 containerd[1508]: time="2026-01-28T02:31:18.735697273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 02:31:18.736617 containerd[1508]: time="2026-01-28T02:31:18.736424097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.48852681s"
Jan 28 02:31:18.736617 containerd[1508]: time="2026-01-28T02:31:18.736474065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 28 02:31:18.738631 containerd[1508]: time="2026-01-28T02:31:18.738600114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 28 02:31:18.779082 containerd[1508]: time="2026-01-28T02:31:18.778881966Z" level=info msg="CreateContainer within sandbox \"127a97bb8b5f64bd00b46183a0c52a3fdf967f5b2ec06887025ac46c00a97df6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 28 02:31:18.801083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1864210488.mount: Deactivated successfully.
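The "in 3.48852681s" that containerd reports for the typha pull agrees, to within roughly 60 microseconds (containerd times the pull internally), with the gap between the PullImage and Pulled journal timestamps above. A quick cross-check, using only the two timestamps from those entries:

    // pull-duration.go - cross-check the pull duration containerd logged
    // against the PullImage / Pulled journal timestamps above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start, _ := time.Parse(time.RFC3339Nano, "2026-01-28T02:31:15.247840909Z") // PullImage
        done, _ := time.Parse(time.RFC3339Nano, "2026-01-28T02:31:18.736424097Z")  // Pulled
        fmt.Println(done.Sub(start)) // prints 3.488583188s; containerd logged "3.48852681s"
    }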
Jan 28 02:31:18.805039 containerd[1508]: time="2026-01-28T02:31:18.804879544Z" level=info msg="CreateContainer within sandbox \"127a97bb8b5f64bd00b46183a0c52a3fdf967f5b2ec06887025ac46c00a97df6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7d0f6849706c214db9ddfc5efe8feede4baad14299250e423c09322c9a20d783\""
Jan 28 02:31:18.807421 containerd[1508]: time="2026-01-28T02:31:18.807389065Z" level=info msg="StartContainer for \"7d0f6849706c214db9ddfc5efe8feede4baad14299250e423c09322c9a20d783\""
Jan 28 02:31:18.854736 kubelet[2700]: E0128 02:31:18.853859 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5"
Jan 28 02:31:18.880353 systemd[1]: Started cri-containerd-7d0f6849706c214db9ddfc5efe8feede4baad14299250e423c09322c9a20d783.scope - libcontainer container 7d0f6849706c214db9ddfc5efe8feede4baad14299250e423c09322c9a20d783.
Jan 28 02:31:18.985958 containerd[1508]: time="2026-01-28T02:31:18.984793229Z" level=info msg="StartContainer for \"7d0f6849706c214db9ddfc5efe8feede4baad14299250e423c09322c9a20d783\" returns successfully"
[... repeated FlexVolume probe failures omitted ...]
Error: unexpected end of JSON input" Jan 28 02:31:20.091336 kubelet[2700]: I0128 02:31:20.091117 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69f7bb8b5-qc6qd" podStartSLOduration=2.598984243 podStartE2EDuration="6.091039479s" podCreationTimestamp="2026-01-28 02:31:14 +0000 UTC" firstStartedPulling="2026-01-28 02:31:15.245918479 +0000 UTC m=+36.592466001" lastFinishedPulling="2026-01-28 02:31:18.737973707 +0000 UTC m=+40.084521237" observedRunningTime="2026-01-28 02:31:19.081480251 +0000 UTC m=+40.428027785" watchObservedRunningTime="2026-01-28 02:31:20.091039479 +0000 UTC m=+41.437587010" Jan 28 02:31:20.154616 kubelet[2700]: E0128 02:31:20.154549 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.154616 kubelet[2700]: W0128 02:31:20.154600 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.154937 kubelet[2700]: E0128 02:31:20.154632 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.154937 kubelet[2700]: E0128 02:31:20.154909 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.154937 kubelet[2700]: W0128 02:31:20.154922 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.155090 kubelet[2700]: E0128 02:31:20.154937 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.156712 kubelet[2700]: E0128 02:31:20.155192 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.156712 kubelet[2700]: W0128 02:31:20.155205 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.156712 kubelet[2700]: E0128 02:31:20.155220 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.156712 kubelet[2700]: E0128 02:31:20.155528 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.156712 kubelet[2700]: W0128 02:31:20.155542 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.156712 kubelet[2700]: E0128 02:31:20.155599 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:31:20.156712 kubelet[2700]: E0128 02:31:20.155919 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.156712 kubelet[2700]: W0128 02:31:20.155940 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.156712 kubelet[2700]: E0128 02:31:20.155981 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.156712 kubelet[2700]: E0128 02:31:20.156344 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.159759 kubelet[2700]: W0128 02:31:20.156360 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.159759 kubelet[2700]: E0128 02:31:20.156387 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.159759 kubelet[2700]: E0128 02:31:20.156633 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.159759 kubelet[2700]: W0128 02:31:20.156646 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.159759 kubelet[2700]: E0128 02:31:20.156672 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.159759 kubelet[2700]: E0128 02:31:20.156950 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.159759 kubelet[2700]: W0128 02:31:20.156964 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.159759 kubelet[2700]: E0128 02:31:20.156979 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.159759 kubelet[2700]: E0128 02:31:20.157282 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.159759 kubelet[2700]: W0128 02:31:20.157414 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.160775 kubelet[2700]: E0128 02:31:20.157458 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:31:20.160775 kubelet[2700]: E0128 02:31:20.158604 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.160775 kubelet[2700]: W0128 02:31:20.158620 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.160775 kubelet[2700]: E0128 02:31:20.158726 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.160775 kubelet[2700]: E0128 02:31:20.159123 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.160775 kubelet[2700]: W0128 02:31:20.159138 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.160775 kubelet[2700]: E0128 02:31:20.159239 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.160775 kubelet[2700]: E0128 02:31:20.159937 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.160775 kubelet[2700]: W0128 02:31:20.159951 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.160775 kubelet[2700]: E0128 02:31:20.159967 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.161848 kubelet[2700]: E0128 02:31:20.160263 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.161848 kubelet[2700]: W0128 02:31:20.160277 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.161848 kubelet[2700]: E0128 02:31:20.160298 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.161848 kubelet[2700]: E0128 02:31:20.160702 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.161848 kubelet[2700]: W0128 02:31:20.160715 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.161848 kubelet[2700]: E0128 02:31:20.160732 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:31:20.161848 kubelet[2700]: E0128 02:31:20.161325 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.161848 kubelet[2700]: W0128 02:31:20.161340 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.161848 kubelet[2700]: E0128 02:31:20.161355 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.170625 kubelet[2700]: E0128 02:31:20.170586 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.171078 kubelet[2700]: W0128 02:31:20.170841 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.171078 kubelet[2700]: E0128 02:31:20.170878 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.171798 kubelet[2700]: E0128 02:31:20.171536 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.171798 kubelet[2700]: W0128 02:31:20.171555 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.171798 kubelet[2700]: E0128 02:31:20.171585 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.172220 kubelet[2700]: E0128 02:31:20.172114 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.172388 kubelet[2700]: W0128 02:31:20.172366 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.172724 kubelet[2700]: E0128 02:31:20.172518 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.173032 kubelet[2700]: E0128 02:31:20.173012 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.173154 kubelet[2700]: W0128 02:31:20.173121 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.173462 kubelet[2700]: E0128 02:31:20.173315 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:31:20.173657 kubelet[2700]: E0128 02:31:20.173638 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.173764 kubelet[2700]: W0128 02:31:20.173744 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.174006 kubelet[2700]: E0128 02:31:20.173911 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.174402 kubelet[2700]: E0128 02:31:20.174209 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.174402 kubelet[2700]: W0128 02:31:20.174239 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.174402 kubelet[2700]: E0128 02:31:20.174302 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.174728 kubelet[2700]: E0128 02:31:20.174708 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.174977 kubelet[2700]: W0128 02:31:20.174800 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.174977 kubelet[2700]: E0128 02:31:20.174847 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.175246 kubelet[2700]: E0128 02:31:20.175227 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.175411 kubelet[2700]: W0128 02:31:20.175389 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.175587 kubelet[2700]: E0128 02:31:20.175566 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.176384 kubelet[2700]: E0128 02:31:20.176126 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.176384 kubelet[2700]: W0128 02:31:20.176343 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.176384 kubelet[2700]: E0128 02:31:20.176372 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:31:20.176932 kubelet[2700]: E0128 02:31:20.176907 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.176932 kubelet[2700]: W0128 02:31:20.176929 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.177249 kubelet[2700]: E0128 02:31:20.177123 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.177408 kubelet[2700]: E0128 02:31:20.177387 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.177408 kubelet[2700]: W0128 02:31:20.177407 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.177597 kubelet[2700]: E0128 02:31:20.177554 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.177760 kubelet[2700]: E0128 02:31:20.177736 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.177760 kubelet[2700]: W0128 02:31:20.177760 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.178195 kubelet[2700]: E0128 02:31:20.177955 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.178195 kubelet[2700]: E0128 02:31:20.178056 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.178195 kubelet[2700]: W0128 02:31:20.178071 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.178195 kubelet[2700]: E0128 02:31:20.178087 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.179527 kubelet[2700]: E0128 02:31:20.178836 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.179527 kubelet[2700]: W0128 02:31:20.178854 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.179527 kubelet[2700]: E0128 02:31:20.178870 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:31:20.179690 kubelet[2700]: E0128 02:31:20.179661 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.179690 kubelet[2700]: W0128 02:31:20.179677 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.179786 kubelet[2700]: E0128 02:31:20.179727 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.180115 kubelet[2700]: E0128 02:31:20.180080 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.180115 kubelet[2700]: W0128 02:31:20.180113 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.180331 kubelet[2700]: E0128 02:31:20.180301 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.180773 kubelet[2700]: E0128 02:31:20.180739 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.180773 kubelet[2700]: W0128 02:31:20.180763 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.181030 kubelet[2700]: E0128 02:31:20.180787 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:31:20.181203 kubelet[2700]: E0128 02:31:20.181181 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:31:20.181355 kubelet[2700]: W0128 02:31:20.181279 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:31:20.181355 kubelet[2700]: E0128 02:31:20.181316 2700 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:31:20.500430 containerd[1508]: time="2026-01-28T02:31:20.500293741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:31:20.504576 containerd[1508]: time="2026-01-28T02:31:20.504505437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 28 02:31:20.732560 containerd[1508]: time="2026-01-28T02:31:20.732037669Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:31:20.749616 containerd[1508]: time="2026-01-28T02:31:20.749557429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:31:20.750599 containerd[1508]: time="2026-01-28T02:31:20.750431973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.011669096s" Jan 28 02:31:20.750599 containerd[1508]: time="2026-01-28T02:31:20.750496683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 28 02:31:20.754684 containerd[1508]: time="2026-01-28T02:31:20.754648706Z" level=info msg="CreateContainer within sandbox \"210e3d72ae21255818cceda12f15fb8e6323d1470b02662b16627aad01a251ba\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 02:31:20.827293 containerd[1508]: time="2026-01-28T02:31:20.818816994Z" level=info msg="CreateContainer within sandbox \"210e3d72ae21255818cceda12f15fb8e6323d1470b02662b16627aad01a251ba\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"821a72fccc8ee18f3b02964391f7e12390dda22c5f2b2f500123b18700697453\"" Jan 28 02:31:20.829228 containerd[1508]: time="2026-01-28T02:31:20.828815122Z" level=info msg="StartContainer for \"821a72fccc8ee18f3b02964391f7e12390dda22c5f2b2f500123b18700697453\"" Jan 28 02:31:20.854839 kubelet[2700]: E0128 02:31:20.854756 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:31:20.904470 systemd[1]: Started cri-containerd-821a72fccc8ee18f3b02964391f7e12390dda22c5f2b2f500123b18700697453.scope - libcontainer container 821a72fccc8ee18f3b02964391f7e12390dda22c5f2b2f500123b18700697453. Jan 28 02:31:20.984782 containerd[1508]: time="2026-01-28T02:31:20.984679257Z" level=info msg="StartContainer for \"821a72fccc8ee18f3b02964391f7e12390dda22c5f2b2f500123b18700697453\" returns successfully" Jan 28 02:31:21.004914 systemd[1]: cri-containerd-821a72fccc8ee18f3b02964391f7e12390dda22c5f2b2f500123b18700697453.scope: Deactivated successfully. 
Jan 28 02:31:21.050062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-821a72fccc8ee18f3b02964391f7e12390dda22c5f2b2f500123b18700697453-rootfs.mount: Deactivated successfully. Jan 28 02:31:21.102346 containerd[1508]: time="2026-01-28T02:31:21.056886855Z" level=info msg="shim disconnected" id=821a72fccc8ee18f3b02964391f7e12390dda22c5f2b2f500123b18700697453 namespace=k8s.io Jan 28 02:31:21.102563 containerd[1508]: time="2026-01-28T02:31:21.102353842Z" level=warning msg="cleaning up after shim disconnected" id=821a72fccc8ee18f3b02964391f7e12390dda22c5f2b2f500123b18700697453 namespace=k8s.io Jan 28 02:31:21.102563 containerd[1508]: time="2026-01-28T02:31:21.102391473Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 02:31:22.073380 containerd[1508]: time="2026-01-28T02:31:22.073329501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 02:31:22.857071 kubelet[2700]: E0128 02:31:22.856518 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:31:24.854892 kubelet[2700]: E0128 02:31:24.853648 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:31:26.855423 kubelet[2700]: E0128 02:31:26.853441 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:31:27.275181 containerd[1508]: time="2026-01-28T02:31:27.272921535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:31:27.277940 containerd[1508]: time="2026-01-28T02:31:27.277881308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 28 02:31:27.279427 containerd[1508]: time="2026-01-28T02:31:27.279368168Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:31:27.280861 containerd[1508]: time="2026-01-28T02:31:27.280680099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.20729233s" Jan 28 02:31:27.280861 containerd[1508]: time="2026-01-28T02:31:27.280733958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 02:31:27.281687 containerd[1508]: time="2026-01-28T02:31:27.281433618Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:31:27.287277 containerd[1508]: time="2026-01-28T02:31:27.286230383Z" level=info msg="CreateContainer within sandbox \"210e3d72ae21255818cceda12f15fb8e6323d1470b02662b16627aad01a251ba\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 02:31:27.312782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921946943.mount: Deactivated successfully. Jan 28 02:31:27.314475 containerd[1508]: time="2026-01-28T02:31:27.312770645Z" level=info msg="CreateContainer within sandbox \"210e3d72ae21255818cceda12f15fb8e6323d1470b02662b16627aad01a251ba\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a71bc5c6ee743570b6963c825a7771edef1bed8ad6821a29a898d3397ae343d6\"" Jan 28 02:31:27.316164 containerd[1508]: time="2026-01-28T02:31:27.316105793Z" level=info msg="StartContainer for \"a71bc5c6ee743570b6963c825a7771edef1bed8ad6821a29a898d3397ae343d6\"" Jan 28 02:31:27.386776 systemd[1]: run-containerd-runc-k8s.io-a71bc5c6ee743570b6963c825a7771edef1bed8ad6821a29a898d3397ae343d6-runc.gZcmaM.mount: Deactivated successfully. Jan 28 02:31:27.400453 systemd[1]: Started cri-containerd-a71bc5c6ee743570b6963c825a7771edef1bed8ad6821a29a898d3397ae343d6.scope - libcontainer container a71bc5c6ee743570b6963c825a7771edef1bed8ad6821a29a898d3397ae343d6. Jan 28 02:31:27.448583 containerd[1508]: time="2026-01-28T02:31:27.448531925Z" level=info msg="StartContainer for \"a71bc5c6ee743570b6963c825a7771edef1bed8ad6821a29a898d3397ae343d6\" returns successfully" Jan 28 02:31:28.743773 systemd[1]: cri-containerd-a71bc5c6ee743570b6963c825a7771edef1bed8ad6821a29a898d3397ae343d6.scope: Deactivated successfully. Jan 28 02:31:28.805615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a71bc5c6ee743570b6963c825a7771edef1bed8ad6821a29a898d3397ae343d6-rootfs.mount: Deactivated successfully. 
Jan 28 02:31:28.809018 containerd[1508]: time="2026-01-28T02:31:28.808914421Z" level=info msg="shim disconnected" id=a71bc5c6ee743570b6963c825a7771edef1bed8ad6821a29a898d3397ae343d6 namespace=k8s.io Jan 28 02:31:28.809822 containerd[1508]: time="2026-01-28T02:31:28.809027318Z" level=warning msg="cleaning up after shim disconnected" id=a71bc5c6ee743570b6963c825a7771edef1bed8ad6821a29a898d3397ae343d6 namespace=k8s.io Jan 28 02:31:28.809822 containerd[1508]: time="2026-01-28T02:31:28.809053144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 02:31:28.854139 kubelet[2700]: E0128 02:31:28.853604 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:31:28.861257 kubelet[2700]: I0128 02:31:28.861199 2700 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 02:31:29.046349 kubelet[2700]: I0128 02:31:29.045313 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzjjp\" (UniqueName: \"kubernetes.io/projected/cd42b56d-5021-410e-8408-e15b3c52f065-kube-api-access-dzjjp\") pod \"calico-apiserver-7866ff566b-tbgpj\" (UID: \"cd42b56d-5021-410e-8408-e15b3c52f065\") " pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" Jan 28 02:31:29.046349 kubelet[2700]: I0128 02:31:29.045377 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z69r7\" (UniqueName: \"kubernetes.io/projected/bc7a2646-8a27-4b05-8c51-22c9804a41de-kube-api-access-z69r7\") pod \"coredns-668d6bf9bc-b4mnx\" (UID: \"bc7a2646-8a27-4b05-8c51-22c9804a41de\") " pod="kube-system/coredns-668d6bf9bc-b4mnx" Jan 28 02:31:29.046349 kubelet[2700]: I0128 02:31:29.045409 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb78a5cb-4de2-4536-925b-fdddfbef361f-config-volume\") pod \"coredns-668d6bf9bc-mqzws\" (UID: \"cb78a5cb-4de2-4536-925b-fdddfbef361f\") " pod="kube-system/coredns-668d6bf9bc-mqzws" Jan 28 02:31:29.046349 kubelet[2700]: I0128 02:31:29.045437 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5szp6\" (UniqueName: \"kubernetes.io/projected/0a6be4a3-a931-4bdf-98fa-3be5929a5064-kube-api-access-5szp6\") pod \"calico-apiserver-7866ff566b-wrtzl\" (UID: \"0a6be4a3-a931-4bdf-98fa-3be5929a5064\") " pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" Jan 28 02:31:29.046349 kubelet[2700]: I0128 02:31:29.045463 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/345054d8-51ec-4ec2-90c7-329ebe97ba46-config\") pod \"goldmane-666569f655-bvmzd\" (UID: \"345054d8-51ec-4ec2-90c7-329ebe97ba46\") " pod="calico-system/goldmane-666569f655-bvmzd" Jan 28 02:31:29.047082 kubelet[2700]: I0128 02:31:29.045497 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0a6be4a3-a931-4bdf-98fa-3be5929a5064-calico-apiserver-certs\") pod \"calico-apiserver-7866ff566b-wrtzl\" (UID: \"0a6be4a3-a931-4bdf-98fa-3be5929a5064\") " 
pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" Jan 28 02:31:29.047082 kubelet[2700]: I0128 02:31:29.045527 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/345054d8-51ec-4ec2-90c7-329ebe97ba46-goldmane-key-pair\") pod \"goldmane-666569f655-bvmzd\" (UID: \"345054d8-51ec-4ec2-90c7-329ebe97ba46\") " pod="calico-system/goldmane-666569f655-bvmzd" Jan 28 02:31:29.047082 kubelet[2700]: I0128 02:31:29.045554 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc7a2646-8a27-4b05-8c51-22c9804a41de-config-volume\") pod \"coredns-668d6bf9bc-b4mnx\" (UID: \"bc7a2646-8a27-4b05-8c51-22c9804a41de\") " pod="kube-system/coredns-668d6bf9bc-b4mnx" Jan 28 02:31:29.047082 kubelet[2700]: I0128 02:31:29.045595 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtr6m\" (UniqueName: \"kubernetes.io/projected/cb78a5cb-4de2-4536-925b-fdddfbef361f-kube-api-access-gtr6m\") pod \"coredns-668d6bf9bc-mqzws\" (UID: \"cb78a5cb-4de2-4536-925b-fdddfbef361f\") " pod="kube-system/coredns-668d6bf9bc-mqzws" Jan 28 02:31:29.047082 kubelet[2700]: I0128 02:31:29.045628 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7df7937d-1785-494d-97a4-262107c3cdf6-whisker-ca-bundle\") pod \"whisker-7f9cc9d84b-4zj2q\" (UID: \"7df7937d-1785-494d-97a4-262107c3cdf6\") " pod="calico-system/whisker-7f9cc9d84b-4zj2q" Jan 28 02:31:29.049390 kubelet[2700]: I0128 02:31:29.045677 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7df7937d-1785-494d-97a4-262107c3cdf6-whisker-backend-key-pair\") pod \"whisker-7f9cc9d84b-4zj2q\" (UID: \"7df7937d-1785-494d-97a4-262107c3cdf6\") " pod="calico-system/whisker-7f9cc9d84b-4zj2q" Jan 28 02:31:29.049390 kubelet[2700]: I0128 02:31:29.045716 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cd42b56d-5021-410e-8408-e15b3c52f065-calico-apiserver-certs\") pod \"calico-apiserver-7866ff566b-tbgpj\" (UID: \"cd42b56d-5021-410e-8408-e15b3c52f065\") " pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" Jan 28 02:31:29.049390 kubelet[2700]: I0128 02:31:29.045755 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5mfr\" (UniqueName: \"kubernetes.io/projected/19aa6a03-3b76-49c3-840d-da43872b111b-kube-api-access-p5mfr\") pod \"calico-kube-controllers-858bccccf6-bqm86\" (UID: \"19aa6a03-3b76-49c3-840d-da43872b111b\") " pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" Jan 28 02:31:29.049390 kubelet[2700]: I0128 02:31:29.045798 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kghlk\" (UniqueName: \"kubernetes.io/projected/7df7937d-1785-494d-97a4-262107c3cdf6-kube-api-access-kghlk\") pod \"whisker-7f9cc9d84b-4zj2q\" (UID: \"7df7937d-1785-494d-97a4-262107c3cdf6\") " pod="calico-system/whisker-7f9cc9d84b-4zj2q" Jan 28 02:31:29.049390 kubelet[2700]: I0128 02:31:29.045832 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19aa6a03-3b76-49c3-840d-da43872b111b-tigera-ca-bundle\") pod \"calico-kube-controllers-858bccccf6-bqm86\" (UID: \"19aa6a03-3b76-49c3-840d-da43872b111b\") " pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" Jan 28 02:31:29.051743 kubelet[2700]: I0128 02:31:29.045868 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/345054d8-51ec-4ec2-90c7-329ebe97ba46-goldmane-ca-bundle\") pod \"goldmane-666569f655-bvmzd\" (UID: \"345054d8-51ec-4ec2-90c7-329ebe97ba46\") " pod="calico-system/goldmane-666569f655-bvmzd" Jan 28 02:31:29.051743 kubelet[2700]: I0128 02:31:29.045899 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8zps\" (UniqueName: \"kubernetes.io/projected/345054d8-51ec-4ec2-90c7-329ebe97ba46-kube-api-access-z8zps\") pod \"goldmane-666569f655-bvmzd\" (UID: \"345054d8-51ec-4ec2-90c7-329ebe97ba46\") " pod="calico-system/goldmane-666569f655-bvmzd" Jan 28 02:31:29.130127 containerd[1508]: time="2026-01-28T02:31:29.129614275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 02:31:29.142408 systemd[1]: Created slice kubepods-burstable-podbc7a2646_8a27_4b05_8c51_22c9804a41de.slice - libcontainer container kubepods-burstable-podbc7a2646_8a27_4b05_8c51_22c9804a41de.slice. Jan 28 02:31:29.151018 systemd[1]: Created slice kubepods-besteffort-pod7df7937d_1785_494d_97a4_262107c3cdf6.slice - libcontainer container kubepods-besteffort-pod7df7937d_1785_494d_97a4_262107c3cdf6.slice. Jan 28 02:31:29.240070 systemd[1]: Created slice kubepods-besteffort-pod19aa6a03_3b76_49c3_840d_da43872b111b.slice - libcontainer container kubepods-besteffort-pod19aa6a03_3b76_49c3_840d_da43872b111b.slice. Jan 28 02:31:29.259254 systemd[1]: Created slice kubepods-burstable-podcb78a5cb_4de2_4536_925b_fdddfbef361f.slice - libcontainer container kubepods-burstable-podcb78a5cb_4de2_4536_925b_fdddfbef361f.slice. Jan 28 02:31:29.264192 containerd[1508]: time="2026-01-28T02:31:29.263006412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-858bccccf6-bqm86,Uid:19aa6a03-3b76-49c3-840d-da43872b111b,Namespace:calico-system,Attempt:0,}" Jan 28 02:31:29.313066 systemd[1]: Created slice kubepods-besteffort-pod0a6be4a3_a931_4bdf_98fa_3be5929a5064.slice - libcontainer container kubepods-besteffort-pod0a6be4a3_a931_4bdf_98fa_3be5929a5064.slice. Jan 28 02:31:29.334130 containerd[1508]: time="2026-01-28T02:31:29.333375387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7866ff566b-wrtzl,Uid:0a6be4a3-a931-4bdf-98fa-3be5929a5064,Namespace:calico-apiserver,Attempt:0,}" Jan 28 02:31:29.340218 systemd[1]: Created slice kubepods-besteffort-pod345054d8_51ec_4ec2_90c7_329ebe97ba46.slice - libcontainer container kubepods-besteffort-pod345054d8_51ec_4ec2_90c7_329ebe97ba46.slice. Jan 28 02:31:29.378939 systemd[1]: Created slice kubepods-besteffort-podcd42b56d_5021_410e_8408_e15b3c52f065.slice - libcontainer container kubepods-besteffort-podcd42b56d_5021_410e_8408_e15b3c52f065.slice. 
Jan 28 02:31:29.382692 containerd[1508]: time="2026-01-28T02:31:29.381500141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bvmzd,Uid:345054d8-51ec-4ec2-90c7-329ebe97ba46,Namespace:calico-system,Attempt:0,}" Jan 28 02:31:29.401938 containerd[1508]: time="2026-01-28T02:31:29.401176560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7866ff566b-tbgpj,Uid:cd42b56d-5021-410e-8408-e15b3c52f065,Namespace:calico-apiserver,Attempt:0,}" Jan 28 02:31:29.505600 containerd[1508]: time="2026-01-28T02:31:29.505555972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b4mnx,Uid:bc7a2646-8a27-4b05-8c51-22c9804a41de,Namespace:kube-system,Attempt:0,}" Jan 28 02:31:29.509072 containerd[1508]: time="2026-01-28T02:31:29.509039536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f9cc9d84b-4zj2q,Uid:7df7937d-1785-494d-97a4-262107c3cdf6,Namespace:calico-system,Attempt:0,}" Jan 28 02:31:29.594202 containerd[1508]: time="2026-01-28T02:31:29.593299846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mqzws,Uid:cb78a5cb-4de2-4536-925b-fdddfbef361f,Namespace:kube-system,Attempt:0,}" Jan 28 02:31:29.982368 containerd[1508]: time="2026-01-28T02:31:29.982290502Z" level=error msg="Failed to destroy network for sandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:29.989430 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d-shm.mount: Deactivated successfully. Jan 28 02:31:29.991528 containerd[1508]: time="2026-01-28T02:31:29.990577715Z" level=error msg="Failed to destroy network for sandbox \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:29.995708 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193-shm.mount: Deactivated successfully. 
Jan 28 02:31:30.008356 containerd[1508]: time="2026-01-28T02:31:30.008027934Z" level=error msg="encountered an error cleaning up failed sandbox \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.008356 containerd[1508]: time="2026-01-28T02:31:30.008205484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7866ff566b-wrtzl,Uid:0a6be4a3-a931-4bdf-98fa-3be5929a5064,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.010170 containerd[1508]: time="2026-01-28T02:31:30.009750128Z" level=error msg="encountered an error cleaning up failed sandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.010170 containerd[1508]: time="2026-01-28T02:31:30.009835878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-858bccccf6-bqm86,Uid:19aa6a03-3b76-49c3-840d-da43872b111b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.018361 containerd[1508]: time="2026-01-28T02:31:30.018297649Z" level=error msg="Failed to destroy network for sandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.021178 containerd[1508]: time="2026-01-28T02:31:30.020401941Z" level=error msg="Failed to destroy network for sandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.025300 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1-shm.mount: Deactivated successfully. 
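Every sandbox failure in this stretch reduces to one missing file: the Calico CNI plugin stats /var/lib/calico/nodename, which the calico/node container writes once it is running with /var/lib/calico/ mounted, and that container has not started yet. A minimal sketch of the failing check, assumed to mirror (not copy) the plugin's logic:

```go
// Reproduce the stat failure that every RunPodSandbox error above reports.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	if _, err := os.Stat(nodenameFile); err != nil {
		// With the file absent this prints exactly:
		// stat /var/lib/calico/nodename: no such file or directory
		fmt.Println(err)
		fmt.Println("check that the calico/node container is running and has mounted /var/lib/calico/")
		return
	}
	// Once calico/node has written the file, the plugin can read its node name.
	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("node name:", string(name))
}
```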
Jan 28 02:31:30.025772 containerd[1508]: time="2026-01-28T02:31:30.025613711Z" level=error msg="Failed to destroy network for sandbox \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.029282 containerd[1508]: time="2026-01-28T02:31:30.019297891Z" level=error msg="Failed to destroy network for sandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.031946 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa-shm.mount: Deactivated successfully. Jan 28 02:31:30.032374 containerd[1508]: time="2026-01-28T02:31:30.032319323Z" level=error msg="Failed to destroy network for sandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.032690 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2-shm.mount: Deactivated successfully. Jan 28 02:31:30.033766 containerd[1508]: time="2026-01-28T02:31:30.033528838Z" level=error msg="encountered an error cleaning up failed sandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.033766 containerd[1508]: time="2026-01-28T02:31:30.033603802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f9cc9d84b-4zj2q,Uid:7df7937d-1785-494d-97a4-262107c3cdf6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.035181 containerd[1508]: time="2026-01-28T02:31:30.034407704Z" level=error msg="encountered an error cleaning up failed sandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.035181 containerd[1508]: time="2026-01-28T02:31:30.034469583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mqzws,Uid:cb78a5cb-4de2-4536-925b-fdddfbef361f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.035880 
containerd[1508]: time="2026-01-28T02:31:30.035842171Z" level=error msg="encountered an error cleaning up failed sandbox \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.036217 containerd[1508]: time="2026-01-28T02:31:30.036179715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b4mnx,Uid:bc7a2646-8a27-4b05-8c51-22c9804a41de,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.036540 containerd[1508]: time="2026-01-28T02:31:30.036385266Z" level=error msg="encountered an error cleaning up failed sandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.036540 containerd[1508]: time="2026-01-28T02:31:30.036449518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bvmzd,Uid:345054d8-51ec-4ec2-90c7-329ebe97ba46,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.038222 containerd[1508]: time="2026-01-28T02:31:30.038069937Z" level=error msg="encountered an error cleaning up failed sandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.038327 containerd[1508]: time="2026-01-28T02:31:30.038190638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7866ff566b-tbgpj,Uid:cd42b56d-5021-410e-8408-e15b3c52f065,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.041099 kubelet[2700]: E0128 02:31:30.013757 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.041881 kubelet[2700]: E0128 02:31:30.013839 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.044462 kubelet[2700]: E0128 02:31:30.043053 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" Jan 28 02:31:30.044462 kubelet[2700]: E0128 02:31:30.043118 2700 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" Jan 28 02:31:30.044462 kubelet[2700]: E0128 02:31:30.043236 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7866ff566b-wrtzl_calico-apiserver(0a6be4a3-a931-4bdf-98fa-3be5929a5064)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7866ff566b-wrtzl_calico-apiserver(0a6be4a3-a931-4bdf-98fa-3be5929a5064)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" podUID="0a6be4a3-a931-4bdf-98fa-3be5929a5064" Jan 28 02:31:30.044980 kubelet[2700]: E0128 02:31:30.044947 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.045657 kubelet[2700]: E0128 02:31:30.045112 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" Jan 28 02:31:30.045657 kubelet[2700]: E0128 02:31:30.045164 2700 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" Jan 28 02:31:30.045657 kubelet[2700]: E0128 02:31:30.045212 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7866ff566b-tbgpj_calico-apiserver(cd42b56d-5021-410e-8408-e15b3c52f065)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7866ff566b-tbgpj_calico-apiserver(cd42b56d-5021-410e-8408-e15b3c52f065)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" podUID="cd42b56d-5021-410e-8408-e15b3c52f065" Jan 28 02:31:30.045878 kubelet[2700]: E0128 02:31:30.043050 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" Jan 28 02:31:30.045878 kubelet[2700]: E0128 02:31:30.045269 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.045878 kubelet[2700]: E0128 02:31:30.045301 2700 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" Jan 28 02:31:30.045878 kubelet[2700]: E0128 02:31:30.045309 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f9cc9d84b-4zj2q" Jan 28 02:31:30.046106 kubelet[2700]: E0128 02:31:30.045332 2700 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f9cc9d84b-4zj2q" Jan 28 02:31:30.046106 kubelet[2700]: E0128 02:31:30.045347 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-858bccccf6-bqm86_calico-system(19aa6a03-3b76-49c3-840d-da43872b111b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-858bccccf6-bqm86_calico-system(19aa6a03-3b76-49c3-840d-da43872b111b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" podUID="19aa6a03-3b76-49c3-840d-da43872b111b" Jan 28 02:31:30.046106 kubelet[2700]: E0128 02:31:30.045378 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f9cc9d84b-4zj2q_calico-system(7df7937d-1785-494d-97a4-262107c3cdf6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f9cc9d84b-4zj2q_calico-system(7df7937d-1785-494d-97a4-262107c3cdf6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f9cc9d84b-4zj2q" podUID="7df7937d-1785-494d-97a4-262107c3cdf6" Jan 28 02:31:30.046402 kubelet[2700]: E0128 02:31:30.045416 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.046402 kubelet[2700]: E0128 02:31:30.045440 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.046402 kubelet[2700]: E0128 02:31:30.045454 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mqzws" Jan 28 02:31:30.046402 kubelet[2700]: E0128 02:31:30.045469 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bvmzd" Jan 28 02:31:30.046647 kubelet[2700]: E0128 02:31:30.045477 2700 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mqzws" Jan 28 02:31:30.046647 kubelet[2700]: E0128 02:31:30.045489 2700 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bvmzd" Jan 28 02:31:30.046647 kubelet[2700]: E0128 02:31:30.045530 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mqzws_kube-system(cb78a5cb-4de2-4536-925b-fdddfbef361f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mqzws_kube-system(cb78a5cb-4de2-4536-925b-fdddfbef361f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mqzws" podUID="cb78a5cb-4de2-4536-925b-fdddfbef361f" Jan 28 02:31:30.046846 kubelet[2700]: E0128 02:31:30.045422 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.046846 kubelet[2700]: E0128 02:31:30.045576 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-b4mnx" Jan 28 02:31:30.046846 kubelet[2700]: E0128 02:31:30.045596 2700 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-b4mnx" Jan 28 02:31:30.046973 kubelet[2700]: E0128 02:31:30.045529 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-bvmzd_calico-system(345054d8-51ec-4ec2-90c7-329ebe97ba46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-bvmzd_calico-system(345054d8-51ec-4ec2-90c7-329ebe97ba46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bvmzd" podUID="345054d8-51ec-4ec2-90c7-329ebe97ba46" Jan 28 02:31:30.047625 kubelet[2700]: E0128 02:31:30.047474 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-b4mnx_kube-system(bc7a2646-8a27-4b05-8c51-22c9804a41de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-b4mnx_kube-system(bc7a2646-8a27-4b05-8c51-22c9804a41de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-b4mnx" podUID="bc7a2646-8a27-4b05-8c51-22c9804a41de" Jan 28 02:31:30.116502 kubelet[2700]: I0128 02:31:30.116318 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:31:30.117730 kubelet[2700]: I0128 02:31:30.117703 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:31:30.129881 kubelet[2700]: I0128 02:31:30.129501 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Jan 28 02:31:30.131077 kubelet[2700]: I0128 02:31:30.131051 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:31:30.133585 kubelet[2700]: I0128 02:31:30.133235 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:31:30.139764 kubelet[2700]: I0128 02:31:30.139341 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:31:30.143017 kubelet[2700]: I0128 02:31:30.142991 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:31:30.162573 containerd[1508]: time="2026-01-28T02:31:30.161763021Z" level=info msg="StopPodSandbox for \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\"" Jan 28 02:31:30.162779 containerd[1508]: time="2026-01-28T02:31:30.162748783Z" level=info msg="StopPodSandbox for \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\"" Jan 28 02:31:30.163686 containerd[1508]: time="2026-01-28T02:31:30.163655543Z" level=info msg="Ensure that sandbox c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193 in task-service has been cleanup successfully" Jan 28 02:31:30.163811 containerd[1508]: time="2026-01-28T02:31:30.163778770Z" level=info msg="StopPodSandbox for \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\"" Jan 28 02:31:30.164325 containerd[1508]: time="2026-01-28T02:31:30.164212823Z" level=info msg="Ensure that sandbox 066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa in task-service 
has been cleanup successfully" Jan 28 02:31:30.165818 containerd[1508]: time="2026-01-28T02:31:30.165602427Z" level=info msg="StopPodSandbox for \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\"" Jan 28 02:31:30.169797 containerd[1508]: time="2026-01-28T02:31:30.169764366Z" level=info msg="StopPodSandbox for \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\"" Jan 28 02:31:30.170191 containerd[1508]: time="2026-01-28T02:31:30.170134106Z" level=info msg="Ensure that sandbox dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d in task-service has been cleanup successfully" Jan 28 02:31:30.172924 containerd[1508]: time="2026-01-28T02:31:30.172894877Z" level=info msg="StopPodSandbox for \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\"" Jan 28 02:31:30.186677 containerd[1508]: time="2026-01-28T02:31:30.186562621Z" level=info msg="Ensure that sandbox 7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd in task-service has been cleanup successfully" Jan 28 02:31:30.187856 containerd[1508]: time="2026-01-28T02:31:30.187703795Z" level=info msg="Ensure that sandbox e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1 in task-service has been cleanup successfully" Jan 28 02:31:30.187856 containerd[1508]: time="2026-01-28T02:31:30.163675383Z" level=info msg="Ensure that sandbox b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296 in task-service has been cleanup successfully" Jan 28 02:31:30.189168 containerd[1508]: time="2026-01-28T02:31:30.163744167Z" level=info msg="StopPodSandbox for \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\"" Jan 28 02:31:30.194517 containerd[1508]: time="2026-01-28T02:31:30.194485826Z" level=info msg="Ensure that sandbox eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2 in task-service has been cleanup successfully" Jan 28 02:31:30.288813 containerd[1508]: time="2026-01-28T02:31:30.288339738Z" level=error msg="StopPodSandbox for \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\" failed" error="failed to destroy network for sandbox \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.295955 containerd[1508]: time="2026-01-28T02:31:30.294550630Z" level=error msg="StopPodSandbox for \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\" failed" error="failed to destroy network for sandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.296779 kubelet[2700]: E0128 02:31:30.296252 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:31:30.296978 kubelet[2700]: E0128 02:31:30.296913 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to destroy network for sandbox \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Jan 28 02:31:30.319314 kubelet[2700]: E0128 02:31:30.296955 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193"} Jan 28 02:31:30.319314 kubelet[2700]: E0128 02:31:30.319049 2700 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a6be4a3-a931-4bdf-98fa-3be5929a5064\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:30.319314 kubelet[2700]: E0128 02:31:30.319108 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a6be4a3-a931-4bdf-98fa-3be5929a5064\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" podUID="0a6be4a3-a931-4bdf-98fa-3be5929a5064" Jan 28 02:31:30.319314 kubelet[2700]: E0128 02:31:30.296354 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d"} Jan 28 02:31:30.319752 kubelet[2700]: E0128 02:31:30.319240 2700 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19aa6a03-3b76-49c3-840d-da43872b111b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:30.319752 kubelet[2700]: E0128 02:31:30.319270 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19aa6a03-3b76-49c3-840d-da43872b111b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" podUID="19aa6a03-3b76-49c3-840d-da43872b111b" Jan 28 02:31:30.347711 containerd[1508]: time="2026-01-28T02:31:30.347625431Z" level=error msg="StopPodSandbox for \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\" failed" error="failed to destroy network for sandbox 
\"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.349027 kubelet[2700]: E0128 02:31:30.348745 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:31:30.349027 kubelet[2700]: E0128 02:31:30.348813 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa"} Jan 28 02:31:30.349027 kubelet[2700]: E0128 02:31:30.348881 2700 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc7a2646-8a27-4b05-8c51-22c9804a41de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:30.349027 kubelet[2700]: E0128 02:31:30.348933 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc7a2646-8a27-4b05-8c51-22c9804a41de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-b4mnx" podUID="bc7a2646-8a27-4b05-8c51-22c9804a41de" Jan 28 02:31:30.362211 containerd[1508]: time="2026-01-28T02:31:30.361127484Z" level=error msg="StopPodSandbox for \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\" failed" error="failed to destroy network for sandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.362367 kubelet[2700]: E0128 02:31:30.361633 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:31:30.362367 kubelet[2700]: E0128 02:31:30.361719 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2"} Jan 28 02:31:30.362367 kubelet[2700]: E0128 02:31:30.361769 2700 kuberuntime_manager.go:1146] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd42b56d-5021-410e-8408-e15b3c52f065\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:30.362367 kubelet[2700]: E0128 02:31:30.361801 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd42b56d-5021-410e-8408-e15b3c52f065\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" podUID="cd42b56d-5021-410e-8408-e15b3c52f065" Jan 28 02:31:30.364139 containerd[1508]: time="2026-01-28T02:31:30.363426065Z" level=error msg="StopPodSandbox for \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\" failed" error="failed to destroy network for sandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.364294 kubelet[2700]: E0128 02:31:30.363689 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:31:30.364294 kubelet[2700]: E0128 02:31:30.363729 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296"} Jan 28 02:31:30.364294 kubelet[2700]: E0128 02:31:30.363784 2700 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7df7937d-1785-494d-97a4-262107c3cdf6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:30.364294 kubelet[2700]: E0128 02:31:30.363854 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7df7937d-1785-494d-97a4-262107c3cdf6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f9cc9d84b-4zj2q" 
podUID="7df7937d-1785-494d-97a4-262107c3cdf6" Jan 28 02:31:30.366195 containerd[1508]: time="2026-01-28T02:31:30.366127265Z" level=error msg="StopPodSandbox for \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\" failed" error="failed to destroy network for sandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.366530 kubelet[2700]: E0128 02:31:30.366483 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:31:30.366595 kubelet[2700]: E0128 02:31:30.366538 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd"} Jan 28 02:31:30.366595 kubelet[2700]: E0128 02:31:30.366578 2700 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"345054d8-51ec-4ec2-90c7-329ebe97ba46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:30.366771 kubelet[2700]: E0128 02:31:30.366606 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"345054d8-51ec-4ec2-90c7-329ebe97ba46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bvmzd" podUID="345054d8-51ec-4ec2-90c7-329ebe97ba46" Jan 28 02:31:30.367847 containerd[1508]: time="2026-01-28T02:31:30.367808272Z" level=error msg="StopPodSandbox for \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\" failed" error="failed to destroy network for sandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.368179 kubelet[2700]: E0128 02:31:30.368126 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:31:30.368284 kubelet[2700]: E0128 
02:31:30.368190 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1"} Jan 28 02:31:30.368284 kubelet[2700]: E0128 02:31:30.368226 2700 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb78a5cb-4de2-4536-925b-fdddfbef361f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:30.368441 kubelet[2700]: E0128 02:31:30.368268 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb78a5cb-4de2-4536-925b-fdddfbef361f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mqzws" podUID="cb78a5cb-4de2-4536-925b-fdddfbef361f" Jan 28 02:31:30.806467 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296-shm.mount: Deactivated successfully. Jan 28 02:31:30.806683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd-shm.mount: Deactivated successfully. Jan 28 02:31:30.867987 systemd[1]: Created slice kubepods-besteffort-pod7c66daa0_da57_4a7e_a3d9_e335fd8bbbe5.slice - libcontainer container kubepods-besteffort-pod7c66daa0_da57_4a7e_a3d9_e335fd8bbbe5.slice. 
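Every failure in the burst above reduces to the same missing file: the Calico CNI plugin reads /var/lib/calico/nodename to learn which node it is running on, and that file only exists once the calico/node container is up with the host's /var/lib/calico mounted, exactly as the error text suggests. A minimal sketch of that check from the node's point of view (illustrative only, not Calico's actual source; the path and error wording are taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Path quoted verbatim in the errors above; calico/node writes it once it
// starts with the host's /var/lib/calico mounted.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Until calico/node is running, this is the stat failure the CNI
		// plugin propagates as "failed (add)" / "failed (delete)".
		fmt.Fprintf(os.Stderr, "nodename not available yet: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("node name the CNI plugin would use:", strings.TrimSpace(string(data)))
}

Both the add and delete paths hit the same stat, which is why sandbox creation and the subsequent cleanup fail with identical messages.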
Jan 28 02:31:30.874005 containerd[1508]: time="2026-01-28T02:31:30.873943218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vjdx,Uid:7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5,Namespace:calico-system,Attempt:0,}" Jan 28 02:31:30.990267 containerd[1508]: time="2026-01-28T02:31:30.990169198Z" level=error msg="Failed to destroy network for sandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.992803 containerd[1508]: time="2026-01-28T02:31:30.992665689Z" level=error msg="encountered an error cleaning up failed sandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.992803 containerd[1508]: time="2026-01-28T02:31:30.992766959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vjdx,Uid:7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.994615 kubelet[2700]: E0128 02:31:30.994558 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:30.995169 kubelet[2700]: E0128 02:31:30.994839 2700 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9vjdx" Jan 28 02:31:30.995169 kubelet[2700]: E0128 02:31:30.994910 2700 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9vjdx" Jan 28 02:31:30.995169 kubelet[2700]: E0128 02:31:30.994998 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9vjdx_calico-system(7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9vjdx_calico-system(7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:31:30.995846 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98-shm.mount: Deactivated successfully. Jan 28 02:31:31.147602 kubelet[2700]: I0128 02:31:31.147415 2700 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:31:31.150206 containerd[1508]: time="2026-01-28T02:31:31.148902429Z" level=info msg="StopPodSandbox for \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\"" Jan 28 02:31:31.150206 containerd[1508]: time="2026-01-28T02:31:31.149286135Z" level=info msg="Ensure that sandbox e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98 in task-service has been cleanup successfully" Jan 28 02:31:31.188161 containerd[1508]: time="2026-01-28T02:31:31.188075917Z" level=error msg="StopPodSandbox for \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\" failed" error="failed to destroy network for sandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:31.188678 kubelet[2700]: E0128 02:31:31.188605 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:31:31.189006 kubelet[2700]: E0128 02:31:31.188791 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98"} Jan 28 02:31:31.189006 kubelet[2700]: E0128 02:31:31.188840 2700 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:31.189006 kubelet[2700]: E0128 02:31:31.188885 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:31:40.861811 containerd[1508]: time="2026-01-28T02:31:40.858369922Z" level=info msg="StopPodSandbox for \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\"" Jan 28 02:31:40.866226 containerd[1508]: time="2026-01-28T02:31:40.865955313Z" level=info msg="StopPodSandbox for \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\"" Jan 28 02:31:41.013645 containerd[1508]: time="2026-01-28T02:31:41.013564304Z" level=error msg="StopPodSandbox for \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\" failed" error="failed to destroy network for sandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:41.014335 kubelet[2700]: E0128 02:31:41.014214 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:31:41.015004 kubelet[2700]: E0128 02:31:41.014362 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d"} Jan 28 02:31:41.015004 kubelet[2700]: E0128 02:31:41.014428 2700 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19aa6a03-3b76-49c3-840d-da43872b111b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:41.015004 kubelet[2700]: E0128 02:31:41.014486 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19aa6a03-3b76-49c3-840d-da43872b111b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" podUID="19aa6a03-3b76-49c3-840d-da43872b111b" Jan 28 02:31:41.037620 containerd[1508]: time="2026-01-28T02:31:41.037473763Z" level=error msg="StopPodSandbox for \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\" failed" error="failed to destroy network for sandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:41.038752 kubelet[2700]: E0128 02:31:41.038187 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:31:41.038752 kubelet[2700]: E0128 02:31:41.038277 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296"} Jan 28 02:31:41.038752 kubelet[2700]: E0128 02:31:41.038670 2700 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7df7937d-1785-494d-97a4-262107c3cdf6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:41.038752 kubelet[2700]: E0128 02:31:41.038707 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7df7937d-1785-494d-97a4-262107c3cdf6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f9cc9d84b-4zj2q" podUID="7df7937d-1785-494d-97a4-262107c3cdf6" Jan 28 02:31:42.858227 containerd[1508]: time="2026-01-28T02:31:42.856346906Z" level=info msg="StopPodSandbox for \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\"" Jan 28 02:31:42.873187 containerd[1508]: time="2026-01-28T02:31:42.864266907Z" level=info msg="StopPodSandbox for \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\"" Jan 28 02:31:43.031565 containerd[1508]: time="2026-01-28T02:31:43.031450831Z" level=error msg="StopPodSandbox for \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\" failed" error="failed to destroy network for sandbox \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:43.032135 kubelet[2700]: E0128 02:31:43.032055 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:31:43.033094 kubelet[2700]: E0128 02:31:43.032911 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa"} Jan 28 02:31:43.033094 kubelet[2700]: E0128 02:31:43.033002 2700 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"bc7a2646-8a27-4b05-8c51-22c9804a41de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:43.033094 kubelet[2700]: E0128 02:31:43.033046 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc7a2646-8a27-4b05-8c51-22c9804a41de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-b4mnx" podUID="bc7a2646-8a27-4b05-8c51-22c9804a41de" Jan 28 02:31:43.038512 containerd[1508]: time="2026-01-28T02:31:43.038446019Z" level=error msg="StopPodSandbox for \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\" failed" error="failed to destroy network for sandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:43.038891 kubelet[2700]: E0128 02:31:43.038721 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:31:43.038891 kubelet[2700]: E0128 02:31:43.038771 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1"} Jan 28 02:31:43.038891 kubelet[2700]: E0128 02:31:43.038818 2700 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb78a5cb-4de2-4536-925b-fdddfbef361f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:43.038891 kubelet[2700]: E0128 02:31:43.038852 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb78a5cb-4de2-4536-925b-fdddfbef361f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mqzws" podUID="cb78a5cb-4de2-4536-925b-fdddfbef361f" Jan 28 02:31:43.855936 
containerd[1508]: time="2026-01-28T02:31:43.855411499Z" level=info msg="StopPodSandbox for \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\"" Jan 28 02:31:43.864717 containerd[1508]: time="2026-01-28T02:31:43.856430197Z" level=info msg="StopPodSandbox for \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\"" Jan 28 02:31:43.970386 containerd[1508]: time="2026-01-28T02:31:43.969378284Z" level=error msg="StopPodSandbox for \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\" failed" error="failed to destroy network for sandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:43.970567 kubelet[2700]: E0128 02:31:43.969731 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:31:43.970567 kubelet[2700]: E0128 02:31:43.969818 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd"} Jan 28 02:31:43.970567 kubelet[2700]: E0128 02:31:43.969875 2700 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"345054d8-51ec-4ec2-90c7-329ebe97ba46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:43.970567 kubelet[2700]: E0128 02:31:43.969919 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"345054d8-51ec-4ec2-90c7-329ebe97ba46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bvmzd" podUID="345054d8-51ec-4ec2-90c7-329ebe97ba46" Jan 28 02:31:43.972261 containerd[1508]: time="2026-01-28T02:31:43.971265303Z" level=error msg="StopPodSandbox for \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\" failed" error="failed to destroy network for sandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:43.972372 kubelet[2700]: E0128 02:31:43.971608 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:31:43.972372 kubelet[2700]: E0128 02:31:43.971661 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98"} Jan 28 02:31:43.972372 kubelet[2700]: E0128 02:31:43.971696 2700 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:43.972372 kubelet[2700]: E0128 02:31:43.971727 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:31:44.029570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444025347.mount: Deactivated successfully. 
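One incidental detail worth decoding: the \x2d in var-lib-containerd-tmpmounts-containerd\x2dmount1444025347.mount is systemd's unit-name escaping, in which '/' becomes '-' and a literal '-' must therefore be encoded as \x2d. A toy encoder covering just the two rules visible here (hypothetical helper; the canonical tool is systemd-escape --path, and the full rules in systemd.unit(5) escape more characters):

package main

import (
	"fmt"
	"strings"
)

// escapePath applies the two systemd unit-name rules seen in the log:
// a literal '-' becomes \x2d, then path separators become '-'.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	p = strings.ReplaceAll(p, "-", `\x2d`)
	return strings.ReplaceAll(p, "/", "-")
}

func main() {
	// Prints: var-lib-containerd-tmpmounts-containerd\x2dmount1444025347.mount
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount1444025347") + ".mount")
}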
Jan 28 02:31:44.178353 containerd[1508]: time="2026-01-28T02:31:44.177892192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:31:44.181328 containerd[1508]: time="2026-01-28T02:31:44.181176165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 28 02:31:44.217416 containerd[1508]: time="2026-01-28T02:31:44.217296800Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:31:44.221409 containerd[1508]: time="2026-01-28T02:31:44.220696706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:31:44.225789 containerd[1508]: time="2026-01-28T02:31:44.225733846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 15.092567453s" Jan 28 02:31:44.225942 containerd[1508]: time="2026-01-28T02:31:44.225825387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 02:31:44.272772 containerd[1508]: time="2026-01-28T02:31:44.271517428Z" level=info msg="CreateContainer within sandbox \"210e3d72ae21255818cceda12f15fb8e6323d1470b02662b16627aad01a251ba\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 02:31:44.349509 containerd[1508]: time="2026-01-28T02:31:44.349425237Z" level=info msg="CreateContainer within sandbox \"210e3d72ae21255818cceda12f15fb8e6323d1470b02662b16627aad01a251ba\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"07eda67b6b986dba0a4e6c7fcee6ace3a8abef9d9edc634cf395262b1371fd74\"" Jan 28 02:31:44.356264 containerd[1508]: time="2026-01-28T02:31:44.355825204Z" level=info msg="StartContainer for \"07eda67b6b986dba0a4e6c7fcee6ace3a8abef9d9edc634cf395262b1371fd74\"" Jan 28 02:31:44.539457 systemd[1]: Started cri-containerd-07eda67b6b986dba0a4e6c7fcee6ace3a8abef9d9edc634cf395262b1371fd74.scope - libcontainer container 07eda67b6b986dba0a4e6c7fcee6ace3a8abef9d9edc634cf395262b1371fd74. 
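The pull records above carry enough data for a rough throughput figure: about 157 MB read in roughly 15.1 s, on the order of 10 MB/s. Back-of-envelope arithmetic, treating the reported window as pure transfer time (a simplification, since layer unpacking and caching are folded in):

package main

import "fmt"

func main() {
	const (
		bytesRead = 156883675.0  // "bytes read" from the stop-pulling record
		pullSecs  = 15.092567453 // duration from the "Pulled image" record
	)
	rate := bytesRead / pullSecs
	// Prints roughly: 10.4 MB/s (9.9 MiB/s)
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
}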
Jan 28 02:31:44.599880 containerd[1508]: time="2026-01-28T02:31:44.599826310Z" level=info msg="StartContainer for \"07eda67b6b986dba0a4e6c7fcee6ace3a8abef9d9edc634cf395262b1371fd74\" returns successfully" Jan 28 02:31:44.915805 containerd[1508]: time="2026-01-28T02:31:44.914903594Z" level=info msg="StopPodSandbox for \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\"" Jan 28 02:31:45.003751 containerd[1508]: time="2026-01-28T02:31:45.003269671Z" level=error msg="StopPodSandbox for \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\" failed" error="failed to destroy network for sandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:31:45.006107 kubelet[2700]: E0128 02:31:45.004671 2700 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:31:45.006107 kubelet[2700]: E0128 02:31:45.004846 2700 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2"} Jan 28 02:31:45.006107 kubelet[2700]: E0128 02:31:45.004935 2700 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd42b56d-5021-410e-8408-e15b3c52f065\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:31:45.006107 kubelet[2700]: E0128 02:31:45.004999 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd42b56d-5021-410e-8408-e15b3c52f065\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" podUID="cd42b56d-5021-410e-8408-e15b3c52f065" Jan 28 02:31:45.082876 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 02:31:45.084005 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 28 02:31:45.357459 kubelet[2700]: I0128 02:31:45.351500 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-47dzk" podStartSLOduration=2.367021783 podStartE2EDuration="31.349052443s" podCreationTimestamp="2026-01-28 02:31:14 +0000 UTC" firstStartedPulling="2026-01-28 02:31:15.246735249 +0000 UTC m=+36.593282765" lastFinishedPulling="2026-01-28 02:31:44.228765903 +0000 UTC m=+65.575313425" observedRunningTime="2026-01-28 02:31:45.339069943 +0000 UTC m=+66.685617480" watchObservedRunningTime="2026-01-28 02:31:45.349052443 +0000 UTC m=+66.695599961" Jan 28 02:31:45.408515 systemd[1]: run-containerd-runc-k8s.io-07eda67b6b986dba0a4e6c7fcee6ace3a8abef9d9edc634cf395262b1371fd74-runc.M9hj5w.mount: Deactivated successfully. Jan 28 02:31:45.473186 containerd[1508]: time="2026-01-28T02:31:45.473097959Z" level=info msg="StopPodSandbox for \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\"" Jan 28 02:31:45.855376 containerd[1508]: time="2026-01-28T02:31:45.855218666Z" level=info msg="StopPodSandbox for \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\"" Jan 28 02:31:45.975645 containerd[1508]: 2026-01-28 02:31:45.637 [INFO][4050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:31:45.975645 containerd[1508]: 2026-01-28 02:31:45.638 [INFO][4050] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" iface="eth0" netns="/var/run/netns/cni-47b1484c-c76f-37e2-0eb3-6cdc42a65996" Jan 28 02:31:45.975645 containerd[1508]: 2026-01-28 02:31:45.638 [INFO][4050] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" iface="eth0" netns="/var/run/netns/cni-47b1484c-c76f-37e2-0eb3-6cdc42a65996" Jan 28 02:31:45.975645 containerd[1508]: 2026-01-28 02:31:45.639 [INFO][4050] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" iface="eth0" netns="/var/run/netns/cni-47b1484c-c76f-37e2-0eb3-6cdc42a65996" Jan 28 02:31:45.975645 containerd[1508]: 2026-01-28 02:31:45.639 [INFO][4050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:31:45.975645 containerd[1508]: 2026-01-28 02:31:45.639 [INFO][4050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:31:45.975645 containerd[1508]: 2026-01-28 02:31:45.915 [INFO][4061] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" HandleID="k8s-pod-network.b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Workload="srv--hg60y.gb1.brightbox.com-k8s-whisker--7f9cc9d84b--4zj2q-eth0" Jan 28 02:31:45.975645 containerd[1508]: 2026-01-28 02:31:45.918 [INFO][4061] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:45.975645 containerd[1508]: 2026-01-28 02:31:45.919 [INFO][4061] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:31:45.975645 containerd[1508]: 2026-01-28 02:31:45.951 [WARNING][4061] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" HandleID="k8s-pod-network.b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Workload="srv--hg60y.gb1.brightbox.com-k8s-whisker--7f9cc9d84b--4zj2q-eth0" Jan 28 02:31:45.975645 containerd[1508]: 2026-01-28 02:31:45.952 [INFO][4061] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" HandleID="k8s-pod-network.b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Workload="srv--hg60y.gb1.brightbox.com-k8s-whisker--7f9cc9d84b--4zj2q-eth0" Jan 28 02:31:45.975645 containerd[1508]: 2026-01-28 02:31:45.957 [INFO][4061] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:31:45.975645 containerd[1508]: 2026-01-28 02:31:45.962 [INFO][4050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:31:45.980178 containerd[1508]: time="2026-01-28T02:31:45.979411584Z" level=info msg="TearDown network for sandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\" successfully" Jan 28 02:31:45.980178 containerd[1508]: time="2026-01-28T02:31:45.979478189Z" level=info msg="StopPodSandbox for \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\" returns successfully" Jan 28 02:31:45.986041 systemd[1]: run-netns-cni\x2d47b1484c\x2dc76f\x2d37e2\x2d0eb3\x2d6cdc42a65996.mount: Deactivated successfully. Jan 28 02:31:46.033174 kubelet[2700]: I0128 02:31:46.032303 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kghlk\" (UniqueName: \"kubernetes.io/projected/7df7937d-1785-494d-97a4-262107c3cdf6-kube-api-access-kghlk\") pod \"7df7937d-1785-494d-97a4-262107c3cdf6\" (UID: \"7df7937d-1785-494d-97a4-262107c3cdf6\") " Jan 28 02:31:46.033174 kubelet[2700]: I0128 02:31:46.032409 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7df7937d-1785-494d-97a4-262107c3cdf6-whisker-ca-bundle\") pod \"7df7937d-1785-494d-97a4-262107c3cdf6\" (UID: \"7df7937d-1785-494d-97a4-262107c3cdf6\") " Jan 28 02:31:46.033174 kubelet[2700]: I0128 02:31:46.032456 2700 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7df7937d-1785-494d-97a4-262107c3cdf6-whisker-backend-key-pair\") pod \"7df7937d-1785-494d-97a4-262107c3cdf6\" (UID: \"7df7937d-1785-494d-97a4-262107c3cdf6\") " Jan 28 02:31:46.044101 kubelet[2700]: I0128 02:31:46.042812 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df7937d-1785-494d-97a4-262107c3cdf6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7df7937d-1785-494d-97a4-262107c3cdf6" (UID: "7df7937d-1785-494d-97a4-262107c3cdf6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 02:31:46.079394 systemd[1]: var-lib-kubelet-pods-7df7937d\x2d1785\x2d494d\x2d97a4\x2d262107c3cdf6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkghlk.mount: Deactivated successfully. Jan 28 02:31:46.079580 systemd[1]: var-lib-kubelet-pods-7df7937d\x2d1785\x2d494d\x2d97a4\x2d262107c3cdf6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
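[Editor's note] The pod_startup_latency_tracker entry for calico-node-47dzk above decomposes cleanly: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (02:31:14 to 02:31:45.349, i.e. 31.349052443s), and podStartSLOduration is that figure minus the image-pull window measured on the monotonic clock (the "m=+..." offsets). A small consistency check of the arithmetic, with every constant copied from the log record:

```go
package main

import "fmt"

func main() {
	// Values from the pod_startup_latency_tracker entry above; the two
	// pull timestamps use their monotonic "m=+" offsets.
	const (
		e2e           = 31.349052443 // podStartE2EDuration
		firstPullMono = 36.593282765 // firstStartedPulling m=+
		lastPullMono  = 65.575313425 // lastFinishedPulling m=+
	)
	pull := lastPullMono - firstPullMono
	fmt.Printf("pull window:  %.9fs\n", pull)     // 28.982030660s
	fmt.Printf("SLO duration: %.9fs\n", e2e-pull) // 2.367021783s, as logged
}
```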
Jan 28 02:31:46.083113 kubelet[2700]: I0128 02:31:46.082848 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df7937d-1785-494d-97a4-262107c3cdf6-kube-api-access-kghlk" (OuterVolumeSpecName: "kube-api-access-kghlk") pod "7df7937d-1785-494d-97a4-262107c3cdf6" (UID: "7df7937d-1785-494d-97a4-262107c3cdf6"). InnerVolumeSpecName "kube-api-access-kghlk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 02:31:46.086215 kubelet[2700]: I0128 02:31:46.085553 2700 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df7937d-1785-494d-97a4-262107c3cdf6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7df7937d-1785-494d-97a4-262107c3cdf6" (UID: "7df7937d-1785-494d-97a4-262107c3cdf6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 02:31:46.138395 kubelet[2700]: I0128 02:31:46.138211 2700 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kghlk\" (UniqueName: \"kubernetes.io/projected/7df7937d-1785-494d-97a4-262107c3cdf6-kube-api-access-kghlk\") on node \"srv-hg60y.gb1.brightbox.com\" DevicePath \"\"" Jan 28 02:31:46.138395 kubelet[2700]: I0128 02:31:46.138282 2700 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7df7937d-1785-494d-97a4-262107c3cdf6-whisker-ca-bundle\") on node \"srv-hg60y.gb1.brightbox.com\" DevicePath \"\"" Jan 28 02:31:46.138395 kubelet[2700]: I0128 02:31:46.138354 2700 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7df7937d-1785-494d-97a4-262107c3cdf6-whisker-backend-key-pair\") on node \"srv-hg60y.gb1.brightbox.com\" DevicePath \"\"" Jan 28 02:31:46.165666 containerd[1508]: 2026-01-28 02:31:46.020 [INFO][4076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Jan 28 02:31:46.165666 containerd[1508]: 2026-01-28 02:31:46.021 [INFO][4076] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" iface="eth0" netns="/var/run/netns/cni-4f5f8501-bebf-ad3c-4f65-5a3f39555550" Jan 28 02:31:46.165666 containerd[1508]: 2026-01-28 02:31:46.021 [INFO][4076] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" iface="eth0" netns="/var/run/netns/cni-4f5f8501-bebf-ad3c-4f65-5a3f39555550" Jan 28 02:31:46.165666 containerd[1508]: 2026-01-28 02:31:46.024 [INFO][4076] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" iface="eth0" netns="/var/run/netns/cni-4f5f8501-bebf-ad3c-4f65-5a3f39555550" Jan 28 02:31:46.165666 containerd[1508]: 2026-01-28 02:31:46.025 [INFO][4076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Jan 28 02:31:46.165666 containerd[1508]: 2026-01-28 02:31:46.025 [INFO][4076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Jan 28 02:31:46.165666 containerd[1508]: 2026-01-28 02:31:46.128 [INFO][4085] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" HandleID="k8s-pod-network.c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:31:46.165666 containerd[1508]: 2026-01-28 02:31:46.129 [INFO][4085] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:46.165666 containerd[1508]: 2026-01-28 02:31:46.129 [INFO][4085] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:31:46.165666 containerd[1508]: 2026-01-28 02:31:46.150 [WARNING][4085] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" HandleID="k8s-pod-network.c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:31:46.165666 containerd[1508]: 2026-01-28 02:31:46.150 [INFO][4085] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" HandleID="k8s-pod-network.c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:31:46.165666 containerd[1508]: 2026-01-28 02:31:46.153 [INFO][4085] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:31:46.165666 containerd[1508]: 2026-01-28 02:31:46.159 [INFO][4076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Jan 28 02:31:46.170103 systemd[1]: run-netns-cni\x2d4f5f8501\x2dbebf\x2dad3c\x2d4f65\x2d5a3f39555550.mount: Deactivated successfully. Jan 28 02:31:46.170784 containerd[1508]: time="2026-01-28T02:31:46.170262537Z" level=info msg="TearDown network for sandbox \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\" successfully" Jan 28 02:31:46.170784 containerd[1508]: time="2026-01-28T02:31:46.170307842Z" level=info msg="StopPodSandbox for \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\" returns successfully" Jan 28 02:31:46.173878 containerd[1508]: time="2026-01-28T02:31:46.173616468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7866ff566b-wrtzl,Uid:0a6be4a3-a931-4bdf-98fa-3be5929a5064,Namespace:calico-apiserver,Attempt:1,}" Jan 28 02:31:46.297259 systemd[1]: Removed slice kubepods-besteffort-pod7df7937d_1785_494d_97a4_262107c3cdf6.slice - libcontainer container kubepods-besteffort-pod7df7937d_1785_494d_97a4_262107c3cdf6.slice. 
Jan 28 02:31:46.581466 systemd[1]: Created slice kubepods-besteffort-pod9d9dfd5e_a429_4d13_9ec6_6f8b582ac456.slice - libcontainer container kubepods-besteffort-pod9d9dfd5e_a429_4d13_9ec6_6f8b582ac456.slice. Jan 28 02:31:46.642694 kubelet[2700]: I0128 02:31:46.642535 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d9dfd5e-a429-4d13-9ec6-6f8b582ac456-whisker-ca-bundle\") pod \"whisker-cd7f4d764-szmpb\" (UID: \"9d9dfd5e-a429-4d13-9ec6-6f8b582ac456\") " pod="calico-system/whisker-cd7f4d764-szmpb" Jan 28 02:31:46.642694 kubelet[2700]: I0128 02:31:46.642636 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59xvb\" (UniqueName: \"kubernetes.io/projected/9d9dfd5e-a429-4d13-9ec6-6f8b582ac456-kube-api-access-59xvb\") pod \"whisker-cd7f4d764-szmpb\" (UID: \"9d9dfd5e-a429-4d13-9ec6-6f8b582ac456\") " pod="calico-system/whisker-cd7f4d764-szmpb" Jan 28 02:31:46.642694 kubelet[2700]: I0128 02:31:46.642702 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9d9dfd5e-a429-4d13-9ec6-6f8b582ac456-whisker-backend-key-pair\") pod \"whisker-cd7f4d764-szmpb\" (UID: \"9d9dfd5e-a429-4d13-9ec6-6f8b582ac456\") " pod="calico-system/whisker-cd7f4d764-szmpb" Jan 28 02:31:46.745204 systemd-networkd[1438]: cali64f27eb97e7: Link UP Jan 28 02:31:46.752570 systemd-networkd[1438]: cali64f27eb97e7: Gained carrier Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.351 [INFO][4094] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.385 [INFO][4094] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0 calico-apiserver-7866ff566b- calico-apiserver 0a6be4a3-a931-4bdf-98fa-3be5929a5064 956 0 2026-01-28 02:31:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7866ff566b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-hg60y.gb1.brightbox.com calico-apiserver-7866ff566b-wrtzl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali64f27eb97e7 [] [] }} ContainerID="61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-wrtzl" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-" Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.386 [INFO][4094] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-wrtzl" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.490 [INFO][4137] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" HandleID="k8s-pod-network.61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" 
Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.490 [INFO][4137] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" HandleID="k8s-pod-network.61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-hg60y.gb1.brightbox.com", "pod":"calico-apiserver-7866ff566b-wrtzl", "timestamp":"2026-01-28 02:31:46.490377998 +0000 UTC"}, Hostname:"srv-hg60y.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.490 [INFO][4137] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.490 [INFO][4137] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.490 [INFO][4137] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hg60y.gb1.brightbox.com' Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.531 [INFO][4137] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.619 [INFO][4137] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.654 [INFO][4137] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.662 [INFO][4137] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.667 [INFO][4137] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.667 [INFO][4137] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.671 [INFO][4137] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.685 [INFO][4137] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.717 [INFO][4137] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.1/26] block=192.168.123.0/26 handle="k8s-pod-network.61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.717 [INFO][4137] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.123.1/26] handle="k8s-pod-network.61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.717 [INFO][4137] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:31:46.818717 containerd[1508]: 2026-01-28 02:31:46.717 [INFO][4137] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.1/26] IPv6=[] ContainerID="61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" HandleID="k8s-pod-network.61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:31:46.821810 containerd[1508]: 2026-01-28 02:31:46.721 [INFO][4094] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-wrtzl" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0", GenerateName:"calico-apiserver-7866ff566b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a6be4a3-a931-4bdf-98fa-3be5929a5064", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7866ff566b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7866ff566b-wrtzl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64f27eb97e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:46.821810 containerd[1508]: 2026-01-28 02:31:46.722 [INFO][4094] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.1/32] ContainerID="61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-wrtzl" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:31:46.821810 containerd[1508]: 2026-01-28 02:31:46.722 [INFO][4094] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64f27eb97e7 ContainerID="61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-wrtzl" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:31:46.821810 containerd[1508]: 2026-01-28 02:31:46.756 [INFO][4094] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-wrtzl" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:31:46.821810 containerd[1508]: 2026-01-28 02:31:46.757 [INFO][4094] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-wrtzl" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0", GenerateName:"calico-apiserver-7866ff566b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a6be4a3-a931-4bdf-98fa-3be5929a5064", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7866ff566b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d", Pod:"calico-apiserver-7866ff566b-wrtzl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64f27eb97e7", MAC:"4a:b7:9c:9c:d7:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:46.821810 containerd[1508]: 2026-01-28 02:31:46.814 [INFO][4094] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-wrtzl" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:31:46.861351 kubelet[2700]: I0128 02:31:46.860704 2700 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df7937d-1785-494d-97a4-262107c3cdf6" path="/var/lib/kubelet/pods/7df7937d-1785-494d-97a4-262107c3cdf6/volumes" Jan 28 02:31:46.887170 containerd[1508]: time="2026-01-28T02:31:46.886914425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:31:46.887170 containerd[1508]: time="2026-01-28T02:31:46.887030103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:31:46.887170 containerd[1508]: time="2026-01-28T02:31:46.887047599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:46.888198 containerd[1508]: time="2026-01-28T02:31:46.888001117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:46.908514 containerd[1508]: time="2026-01-28T02:31:46.908455578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cd7f4d764-szmpb,Uid:9d9dfd5e-a429-4d13-9ec6-6f8b582ac456,Namespace:calico-system,Attempt:0,}" Jan 28 02:31:46.937430 systemd[1]: Started cri-containerd-61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d.scope - libcontainer container 61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d. Jan 28 02:31:47.159496 containerd[1508]: time="2026-01-28T02:31:47.159218736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7866ff566b-wrtzl,Uid:0a6be4a3-a931-4bdf-98fa-3be5929a5064,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d\"" Jan 28 02:31:47.166202 containerd[1508]: time="2026-01-28T02:31:47.162058874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:31:47.299500 systemd-networkd[1438]: cali5d569fabe7e: Link UP Jan 28 02:31:47.302159 systemd-networkd[1438]: cali5d569fabe7e: Gained carrier Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.014 [INFO][4180] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.035 [INFO][4180] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hg60y.gb1.brightbox.com-k8s-whisker--cd7f4d764--szmpb-eth0 whisker-cd7f4d764- calico-system 9d9dfd5e-a429-4d13-9ec6-6f8b582ac456 980 0 2026-01-28 02:31:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cd7f4d764 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-hg60y.gb1.brightbox.com whisker-cd7f4d764-szmpb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5d569fabe7e [] [] }} ContainerID="8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" Namespace="calico-system" Pod="whisker-cd7f4d764-szmpb" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-whisker--cd7f4d764--szmpb-" Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.036 [INFO][4180] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" Namespace="calico-system" Pod="whisker-cd7f4d764-szmpb" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-whisker--cd7f4d764--szmpb-eth0" Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.171 [INFO][4199] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" HandleID="k8s-pod-network.8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" Workload="srv--hg60y.gb1.brightbox.com-k8s-whisker--cd7f4d764--szmpb-eth0" Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.171 [INFO][4199] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" HandleID="k8s-pod-network.8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" Workload="srv--hg60y.gb1.brightbox.com-k8s-whisker--cd7f4d764--szmpb-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cefe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-hg60y.gb1.brightbox.com", "pod":"whisker-cd7f4d764-szmpb", "timestamp":"2026-01-28 02:31:47.171391315 +0000 UTC"}, Hostname:"srv-hg60y.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.171 [INFO][4199] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.171 [INFO][4199] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.172 [INFO][4199] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hg60y.gb1.brightbox.com' Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.184 [INFO][4199] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.198 [INFO][4199] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.212 [INFO][4199] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.226 [INFO][4199] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.240 [INFO][4199] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.241 [INFO][4199] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.250 [INFO][4199] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.261 [INFO][4199] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.283 [INFO][4199] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.2/26] block=192.168.123.0/26 handle="k8s-pod-network.8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.284 [INFO][4199] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.2/26] handle="k8s-pod-network.8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.284 [INFO][4199] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 02:31:47.329108 containerd[1508]: 2026-01-28 02:31:47.284 [INFO][4199] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.2/26] IPv6=[] ContainerID="8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" HandleID="k8s-pod-network.8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" Workload="srv--hg60y.gb1.brightbox.com-k8s-whisker--cd7f4d764--szmpb-eth0" Jan 28 02:31:47.331238 containerd[1508]: 2026-01-28 02:31:47.290 [INFO][4180] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" Namespace="calico-system" Pod="whisker-cd7f4d764-szmpb" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-whisker--cd7f4d764--szmpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-whisker--cd7f4d764--szmpb-eth0", GenerateName:"whisker-cd7f4d764-", Namespace:"calico-system", SelfLink:"", UID:"9d9dfd5e-a429-4d13-9ec6-6f8b582ac456", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cd7f4d764", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"", Pod:"whisker-cd7f4d764-szmpb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.123.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5d569fabe7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:47.331238 containerd[1508]: 2026-01-28 02:31:47.291 [INFO][4180] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.2/32] ContainerID="8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" Namespace="calico-system" Pod="whisker-cd7f4d764-szmpb" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-whisker--cd7f4d764--szmpb-eth0" Jan 28 02:31:47.331238 containerd[1508]: 2026-01-28 02:31:47.291 [INFO][4180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d569fabe7e ContainerID="8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" Namespace="calico-system" Pod="whisker-cd7f4d764-szmpb" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-whisker--cd7f4d764--szmpb-eth0" Jan 28 02:31:47.331238 containerd[1508]: 2026-01-28 02:31:47.303 [INFO][4180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" Namespace="calico-system" Pod="whisker-cd7f4d764-szmpb" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-whisker--cd7f4d764--szmpb-eth0" Jan 28 02:31:47.331238 containerd[1508]: 2026-01-28 02:31:47.305 [INFO][4180] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" Namespace="calico-system" 
Pod="whisker-cd7f4d764-szmpb" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-whisker--cd7f4d764--szmpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-whisker--cd7f4d764--szmpb-eth0", GenerateName:"whisker-cd7f4d764-", Namespace:"calico-system", SelfLink:"", UID:"9d9dfd5e-a429-4d13-9ec6-6f8b582ac456", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cd7f4d764", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b", Pod:"whisker-cd7f4d764-szmpb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.123.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5d569fabe7e", MAC:"ae:44:e4:20:b0:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:47.331238 containerd[1508]: 2026-01-28 02:31:47.324 [INFO][4180] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b" Namespace="calico-system" Pod="whisker-cd7f4d764-szmpb" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-whisker--cd7f4d764--szmpb-eth0" Jan 28 02:31:47.364219 containerd[1508]: time="2026-01-28T02:31:47.363432518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:31:47.364219 containerd[1508]: time="2026-01-28T02:31:47.363521029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:31:47.367956 containerd[1508]: time="2026-01-28T02:31:47.363558080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:47.367956 containerd[1508]: time="2026-01-28T02:31:47.365058809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:47.429060 systemd[1]: Started cri-containerd-8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b.scope - libcontainer container 8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b. 
Jan 28 02:31:47.521646 containerd[1508]: time="2026-01-28T02:31:47.521572002Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:31:47.531657 containerd[1508]: time="2026-01-28T02:31:47.522727178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:31:47.531657 containerd[1508]: time="2026-01-28T02:31:47.522745980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 02:31:47.532373 kubelet[2700]: E0128 02:31:47.532060 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:31:47.532373 kubelet[2700]: E0128 02:31:47.532179 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:31:47.538973 kubelet[2700]: E0128 02:31:47.538852 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5szp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7866ff566b-wrtzl_calico-apiserver(0a6be4a3-a931-4bdf-98fa-3be5929a5064): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:31:47.540689 kubelet[2700]: E0128 02:31:47.540647 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" podUID="0a6be4a3-a931-4bdf-98fa-3be5929a5064" Jan 28 02:31:47.640602 containerd[1508]: time="2026-01-28T02:31:47.640435833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cd7f4d764-szmpb,Uid:9d9dfd5e-a429-4d13-9ec6-6f8b582ac456,Namespace:calico-system,Attempt:0,} returns sandbox id \"8cf59515b866f0c53dea163755e7f17e0d5fdc6ec2b2b32daf81f4c1dfda230b\"" Jan 28 02:31:47.645533 containerd[1508]: time="2026-01-28T02:31:47.645186955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 02:31:47.958608 containerd[1508]: time="2026-01-28T02:31:47.958324855Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:31:47.960723 containerd[1508]: time="2026-01-28T02:31:47.960110469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 02:31:47.960723 containerd[1508]: time="2026-01-28T02:31:47.960353383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 02:31:47.962529 kubelet[2700]: E0128 02:31:47.961103 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:31:47.962529 kubelet[2700]: E0128 02:31:47.961192 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:31:47.962529 kubelet[2700]: E0128 02:31:47.961359 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:50e6dbe8f85b451e9c2d8f88eee475c8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59xvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd7f4d764-szmpb_calico-system(9d9dfd5e-a429-4d13-9ec6-6f8b582ac456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 02:31:47.964452 containerd[1508]: time="2026-01-28T02:31:47.964323206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 02:31:48.201190 kernel: bpftool[4370]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 28 02:31:48.291629 kubelet[2700]: E0128 02:31:48.290724 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" podUID="0a6be4a3-a931-4bdf-98fa-3be5929a5064" Jan 28 02:31:48.334341 containerd[1508]: time="2026-01-28T02:31:48.334100412Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:31:48.336529 containerd[1508]: time="2026-01-28T02:31:48.336228341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 02:31:48.336529 containerd[1508]: time="2026-01-28T02:31:48.336311427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 02:31:48.337119 kubelet[2700]: E0128 02:31:48.336552 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:31:48.337119 kubelet[2700]: E0128 02:31:48.336627 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:31:48.337119 kubelet[2700]: E0128 02:31:48.336797 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59xvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd7f4d764-szmpb_calico-system(9d9dfd5e-a429-4d13-9ec6-6f8b582ac456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Jan 28 02:31:48.349179 kubelet[2700]: E0128 02:31:48.349009 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd7f4d764-szmpb" podUID="9d9dfd5e-a429-4d13-9ec6-6f8b582ac456" Jan 28 02:31:48.363053 systemd-networkd[1438]: cali64f27eb97e7: Gained IPv6LL Jan 28 02:31:48.486334 systemd-networkd[1438]: cali5d569fabe7e: Gained IPv6LL Jan 28 02:31:48.613464 systemd-networkd[1438]: vxlan.calico: Link UP Jan 28 02:31:48.613764 systemd-networkd[1438]: vxlan.calico: Gained carrier Jan 28 02:31:49.297463 kubelet[2700]: E0128 02:31:49.297341 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd7f4d764-szmpb" podUID="9d9dfd5e-a429-4d13-9ec6-6f8b582ac456" Jan 28 02:31:50.278689 systemd-networkd[1438]: vxlan.calico: Gained IPv6LL Jan 28 02:31:53.855478 containerd[1508]: time="2026-01-28T02:31:53.855271973Z" level=info msg="StopPodSandbox for \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\"" Jan 28 02:31:54.031179 containerd[1508]: 2026-01-28 02:31:53.943 [INFO][4481] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:31:54.031179 containerd[1508]: 2026-01-28 02:31:53.943 [INFO][4481] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" iface="eth0" netns="/var/run/netns/cni-6496efa5-e2cb-e653-56fd-418770d7ba24" Jan 28 02:31:54.031179 containerd[1508]: 2026-01-28 02:31:53.944 [INFO][4481] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" iface="eth0" netns="/var/run/netns/cni-6496efa5-e2cb-e653-56fd-418770d7ba24" Jan 28 02:31:54.031179 containerd[1508]: 2026-01-28 02:31:53.945 [INFO][4481] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" iface="eth0" netns="/var/run/netns/cni-6496efa5-e2cb-e653-56fd-418770d7ba24" Jan 28 02:31:54.031179 containerd[1508]: 2026-01-28 02:31:53.945 [INFO][4481] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:31:54.031179 containerd[1508]: 2026-01-28 02:31:53.945 [INFO][4481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:31:54.031179 containerd[1508]: 2026-01-28 02:31:54.003 [INFO][4488] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" HandleID="k8s-pod-network.dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:31:54.031179 containerd[1508]: 2026-01-28 02:31:54.003 [INFO][4488] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:54.031179 containerd[1508]: 2026-01-28 02:31:54.003 [INFO][4488] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:31:54.031179 containerd[1508]: 2026-01-28 02:31:54.018 [WARNING][4488] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" HandleID="k8s-pod-network.dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:31:54.031179 containerd[1508]: 2026-01-28 02:31:54.018 [INFO][4488] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" HandleID="k8s-pod-network.dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:31:54.031179 containerd[1508]: 2026-01-28 02:31:54.022 [INFO][4488] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:31:54.031179 containerd[1508]: 2026-01-28 02:31:54.026 [INFO][4481] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:31:54.032612 containerd[1508]: time="2026-01-28T02:31:54.032366841Z" level=info msg="TearDown network for sandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\" successfully" Jan 28 02:31:54.034243 containerd[1508]: time="2026-01-28T02:31:54.034202878Z" level=info msg="StopPodSandbox for \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\" returns successfully" Jan 28 02:31:54.036902 containerd[1508]: time="2026-01-28T02:31:54.035627452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-858bccccf6-bqm86,Uid:19aa6a03-3b76-49c3-840d-da43872b111b,Namespace:calico-system,Attempt:1,}" Jan 28 02:31:54.045392 systemd[1]: run-netns-cni\x2d6496efa5\x2de2cb\x2de653\x2d56fd\x2d418770d7ba24.mount: Deactivated successfully. 
Jan 28 02:31:54.256123 systemd-networkd[1438]: calic458fc7f0b7: Link UP Jan 28 02:31:54.256510 systemd-networkd[1438]: calic458fc7f0b7: Gained carrier Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.137 [INFO][4495] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0 calico-kube-controllers-858bccccf6- calico-system 19aa6a03-3b76-49c3-840d-da43872b111b 1030 0 2026-01-28 02:31:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:858bccccf6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-hg60y.gb1.brightbox.com calico-kube-controllers-858bccccf6-bqm86 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic458fc7f0b7 [] [] }} ContainerID="c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" Namespace="calico-system" Pod="calico-kube-controllers-858bccccf6-bqm86" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-" Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.138 [INFO][4495] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" Namespace="calico-system" Pod="calico-kube-controllers-858bccccf6-bqm86" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.181 [INFO][4512] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" HandleID="k8s-pod-network.c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.181 [INFO][4512] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" HandleID="k8s-pod-network.c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad310), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-hg60y.gb1.brightbox.com", "pod":"calico-kube-controllers-858bccccf6-bqm86", "timestamp":"2026-01-28 02:31:54.181036264 +0000 UTC"}, Hostname:"srv-hg60y.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.181 [INFO][4512] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.181 [INFO][4512] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.181 [INFO][4512] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hg60y.gb1.brightbox.com' Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.195 [INFO][4512] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.204 [INFO][4512] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.216 [INFO][4512] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.220 [INFO][4512] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.224 [INFO][4512] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.224 [INFO][4512] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.229 [INFO][4512] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.234 [INFO][4512] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.244 [INFO][4512] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.3/26] block=192.168.123.0/26 handle="k8s-pod-network.c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.245 [INFO][4512] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.3/26] handle="k8s-pod-network.c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.245 [INFO][4512] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
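The ipam/ipam.go lines above trace one complete allocation: take the host-wide lock, confirm this node's affinity to block 192.168.123.0/26, claim the next free address (192.168.123.3 here), and persist the block before releasing the lock. A schematic reconstruction of that order follows, with Block, acquireHostWideLock, and writeBlock as hypothetical stand-ins rather than Calico's actual API:

    // assignIPv4 mirrors the logged sequence for a single address request.
    func assignIPv4(host string, block *Block) (net.IP, error) {
        lock := acquireHostWideLock() // "About to acquire host-wide IPAM lock."
        defer lock.Release()          // "Released host-wide IPAM lock."

        if !block.AffineTo(host) { // "Trying affinity for 192.168.123.0/26"
            return nil, fmt.Errorf("host %s has no affinity to %s", host, block.CIDR)
        }
        ip, ok := block.NextFree() // "Attempting to assign 1 addresses from block"
        if !ok {
            return nil, fmt.Errorf("block %s is full", block.CIDR)
        }
        block.MarkAllocated(ip)
        // Persisting the block is what actually claims the IP; a concurrent
        // writer would fail the datastore write and retry.
        return ip, writeBlock(block) // "Writing block in order to claim IPs"
    }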
Jan 28 02:31:54.292786 containerd[1508]: 2026-01-28 02:31:54.245 [INFO][4512] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.3/26] IPv6=[] ContainerID="c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" HandleID="k8s-pod-network.c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:31:54.298741 containerd[1508]: 2026-01-28 02:31:54.248 [INFO][4495] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" Namespace="calico-system" Pod="calico-kube-controllers-858bccccf6-bqm86" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0", GenerateName:"calico-kube-controllers-858bccccf6-", Namespace:"calico-system", SelfLink:"", UID:"19aa6a03-3b76-49c3-840d-da43872b111b", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"858bccccf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-858bccccf6-bqm86", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic458fc7f0b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:54.298741 containerd[1508]: 2026-01-28 02:31:54.249 [INFO][4495] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.3/32] ContainerID="c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" Namespace="calico-system" Pod="calico-kube-controllers-858bccccf6-bqm86" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:31:54.298741 containerd[1508]: 2026-01-28 02:31:54.249 [INFO][4495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic458fc7f0b7 ContainerID="c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" Namespace="calico-system" Pod="calico-kube-controllers-858bccccf6-bqm86" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:31:54.298741 containerd[1508]: 2026-01-28 02:31:54.253 [INFO][4495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" Namespace="calico-system" Pod="calico-kube-controllers-858bccccf6-bqm86" 
WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:31:54.298741 containerd[1508]: 2026-01-28 02:31:54.256 [INFO][4495] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" Namespace="calico-system" Pod="calico-kube-controllers-858bccccf6-bqm86" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0", GenerateName:"calico-kube-controllers-858bccccf6-", Namespace:"calico-system", SelfLink:"", UID:"19aa6a03-3b76-49c3-840d-da43872b111b", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"858bccccf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd", Pod:"calico-kube-controllers-858bccccf6-bqm86", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic458fc7f0b7", MAC:"c2:4e:7f:18:d4:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:54.298741 containerd[1508]: 2026-01-28 02:31:54.285 [INFO][4495] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd" Namespace="calico-system" Pod="calico-kube-controllers-858bccccf6-bqm86" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:31:54.335035 containerd[1508]: time="2026-01-28T02:31:54.334786981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:31:54.335035 containerd[1508]: time="2026-01-28T02:31:54.334897215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:31:54.335035 containerd[1508]: time="2026-01-28T02:31:54.334916299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:54.335899 containerd[1508]: time="2026-01-28T02:31:54.335071000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:54.406416 systemd[1]: Started cri-containerd-c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd.scope - libcontainer container c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd. Jan 28 02:31:54.482974 containerd[1508]: time="2026-01-28T02:31:54.482926653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-858bccccf6-bqm86,Uid:19aa6a03-3b76-49c3-840d-da43872b111b,Namespace:calico-system,Attempt:1,} returns sandbox id \"c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd\"" Jan 28 02:31:54.486555 containerd[1508]: time="2026-01-28T02:31:54.486449692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 02:31:54.800439 containerd[1508]: time="2026-01-28T02:31:54.800357347Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:31:54.801780 containerd[1508]: time="2026-01-28T02:31:54.801711335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 02:31:54.802121 containerd[1508]: time="2026-01-28T02:31:54.801728163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 02:31:54.804651 kubelet[2700]: E0128 02:31:54.802356 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:31:54.804651 kubelet[2700]: E0128 02:31:54.802474 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:31:54.804651 kubelet[2700]: E0128 02:31:54.802778 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5mfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-858bccccf6-bqm86_calico-system(19aa6a03-3b76-49c3-840d-da43872b111b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 02:31:54.804651 kubelet[2700]: E0128 02:31:54.804061 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" podUID="19aa6a03-3b76-49c3-840d-da43872b111b" Jan 28 02:31:55.319317 kubelet[2700]: E0128 02:31:55.319185 2700 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" podUID="19aa6a03-3b76-49c3-840d-da43872b111b" Jan 28 02:31:55.855040 containerd[1508]: time="2026-01-28T02:31:55.854477599Z" level=info msg="StopPodSandbox for \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\"" Jan 28 02:31:55.995961 containerd[1508]: 2026-01-28 02:31:55.936 [INFO][4581] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:31:55.995961 containerd[1508]: 2026-01-28 02:31:55.937 [INFO][4581] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" iface="eth0" netns="/var/run/netns/cni-d45f3391-09ff-dd45-d8da-009da6160d9a" Jan 28 02:31:55.995961 containerd[1508]: 2026-01-28 02:31:55.938 [INFO][4581] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" iface="eth0" netns="/var/run/netns/cni-d45f3391-09ff-dd45-d8da-009da6160d9a" Jan 28 02:31:55.995961 containerd[1508]: 2026-01-28 02:31:55.941 [INFO][4581] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" iface="eth0" netns="/var/run/netns/cni-d45f3391-09ff-dd45-d8da-009da6160d9a" Jan 28 02:31:55.995961 containerd[1508]: 2026-01-28 02:31:55.941 [INFO][4581] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:31:55.995961 containerd[1508]: 2026-01-28 02:31:55.941 [INFO][4581] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:31:55.995961 containerd[1508]: 2026-01-28 02:31:55.977 [INFO][4589] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" HandleID="k8s-pod-network.066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:31:55.995961 containerd[1508]: 2026-01-28 02:31:55.977 [INFO][4589] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:55.995961 containerd[1508]: 2026-01-28 02:31:55.977 [INFO][4589] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:31:55.995961 containerd[1508]: 2026-01-28 02:31:55.988 [WARNING][4589] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" HandleID="k8s-pod-network.066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:31:55.995961 containerd[1508]: 2026-01-28 02:31:55.988 [INFO][4589] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" HandleID="k8s-pod-network.066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:31:55.995961 containerd[1508]: 2026-01-28 02:31:55.991 [INFO][4589] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:31:55.995961 containerd[1508]: 2026-01-28 02:31:55.993 [INFO][4581] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:31:55.997223 containerd[1508]: time="2026-01-28T02:31:55.997015155Z" level=info msg="TearDown network for sandbox \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\" successfully" Jan 28 02:31:55.997223 containerd[1508]: time="2026-01-28T02:31:55.997056100Z" level=info msg="StopPodSandbox for \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\" returns successfully" Jan 28 02:31:55.999216 containerd[1508]: time="2026-01-28T02:31:55.998910626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b4mnx,Uid:bc7a2646-8a27-4b05-8c51-22c9804a41de,Namespace:kube-system,Attempt:1,}" Jan 28 02:31:56.004214 systemd[1]: run-netns-cni\x2dd45f3391\x2d09ff\x2ddd45\x2dd8da\x2d009da6160d9a.mount: Deactivated successfully. Jan 28 02:31:56.166384 systemd-networkd[1438]: calic458fc7f0b7: Gained IPv6LL Jan 28 02:31:56.203013 systemd-networkd[1438]: cali984c4478d11: Link UP Jan 28 02:31:56.204819 systemd-networkd[1438]: cali984c4478d11: Gained carrier Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.068 [INFO][4596] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0 coredns-668d6bf9bc- kube-system bc7a2646-8a27-4b05-8c51-22c9804a41de 1044 0 2026-01-28 02:30:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-hg60y.gb1.brightbox.com coredns-668d6bf9bc-b4mnx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali984c4478d11 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-b4mnx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-" Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.068 [INFO][4596] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-b4mnx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.122 [INFO][4608] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" 
HandleID="k8s-pod-network.91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.123 [INFO][4608] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" HandleID="k8s-pod-network.91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-hg60y.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-b4mnx", "timestamp":"2026-01-28 02:31:56.122502136 +0000 UTC"}, Hostname:"srv-hg60y.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.123 [INFO][4608] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.123 [INFO][4608] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.123 [INFO][4608] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hg60y.gb1.brightbox.com' Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.140 [INFO][4608] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.156 [INFO][4608] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.165 [INFO][4608] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.169 [INFO][4608] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.174 [INFO][4608] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.174 [INFO][4608] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.176 [INFO][4608] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.182 [INFO][4608] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.192 [INFO][4608] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.4/26] block=192.168.123.0/26 handle="k8s-pod-network.91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:56.237109 
containerd[1508]: 2026-01-28 02:31:56.192 [INFO][4608] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.4/26] handle="k8s-pod-network.91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.192 [INFO][4608] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:31:56.237109 containerd[1508]: 2026-01-28 02:31:56.192 [INFO][4608] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.4/26] IPv6=[] ContainerID="91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" HandleID="k8s-pod-network.91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:31:56.239451 containerd[1508]: 2026-01-28 02:31:56.196 [INFO][4596] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-b4mnx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bc7a2646-8a27-4b05-8c51-22c9804a41de", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-b4mnx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali984c4478d11", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:56.239451 containerd[1508]: 2026-01-28 02:31:56.196 [INFO][4596] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.4/32] ContainerID="91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-b4mnx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:31:56.239451 containerd[1508]: 2026-01-28 02:31:56.197 [INFO][4596] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali984c4478d11 ContainerID="91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-b4mnx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:31:56.239451 containerd[1508]: 2026-01-28 02:31:56.206 [INFO][4596] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-b4mnx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:31:56.239451 containerd[1508]: 2026-01-28 02:31:56.209 [INFO][4596] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-b4mnx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bc7a2646-8a27-4b05-8c51-22c9804a41de", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e", Pod:"coredns-668d6bf9bc-b4mnx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali984c4478d11", MAC:"52:0b:65:dc:4a:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:56.239451 containerd[1508]: 2026-01-28 02:31:56.225 [INFO][4596] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e" Namespace="kube-system" Pod="coredns-668d6bf9bc-b4mnx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:31:56.288796 containerd[1508]: time="2026-01-28T02:31:56.288619879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:31:56.289903 containerd[1508]: time="2026-01-28T02:31:56.289822525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:31:56.289903 containerd[1508]: time="2026-01-28T02:31:56.289867205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:56.290242 containerd[1508]: time="2026-01-28T02:31:56.290036602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:56.327184 kubelet[2700]: E0128 02:31:56.325975 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" podUID="19aa6a03-3b76-49c3-840d-da43872b111b" Jan 28 02:31:56.339552 systemd[1]: Started cri-containerd-91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e.scope - libcontainer container 91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e. Jan 28 02:31:56.426456 containerd[1508]: time="2026-01-28T02:31:56.426075513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b4mnx,Uid:bc7a2646-8a27-4b05-8c51-22c9804a41de,Namespace:kube-system,Attempt:1,} returns sandbox id \"91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e\"" Jan 28 02:31:56.439432 containerd[1508]: time="2026-01-28T02:31:56.439353509Z" level=info msg="CreateContainer within sandbox \"91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 02:31:56.465567 containerd[1508]: time="2026-01-28T02:31:56.465473069Z" level=info msg="CreateContainer within sandbox \"91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b0ec6373539f4518ebb935ad822002f3314501c5d9bb11f080fbb6c380cf8866\"" Jan 28 02:31:56.467833 containerd[1508]: time="2026-01-28T02:31:56.467677391Z" level=info msg="StartContainer for \"b0ec6373539f4518ebb935ad822002f3314501c5d9bb11f080fbb6c380cf8866\"" Jan 28 02:31:56.504348 systemd[1]: Started cri-containerd-b0ec6373539f4518ebb935ad822002f3314501c5d9bb11f080fbb6c380cf8866.scope - libcontainer container b0ec6373539f4518ebb935ad822002f3314501c5d9bb11f080fbb6c380cf8866. 
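The coredns bring-up above is the standard CRI sequence: RunPodSandbox returns the sandbox id, CreateContainer registers the coredns container inside it, and StartContainer launches it (the cri-containerd-… scope is the runc shim that systemd tracks). Sketched against the k8s.io/cri-api v1 RuntimeService client; podCfg and corednsCfg are assumed to be built elsewhere, as kubelet builds them from the Pod spec:

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // startCoreDNS issues the three CRI calls in the order they appear in the log.
    func startCoreDNS(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
        podCfg *runtimeapi.PodSandboxConfig, corednsCfg *runtimeapi.ContainerConfig) error {
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: podCfg})
        if err != nil {
            return err
        }
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            Config:        corednsCfg,
            SandboxConfig: podCfg,
        })
        if err != nil {
            return err
        }
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
        return err
    }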
Jan 28 02:31:56.551655 containerd[1508]: time="2026-01-28T02:31:56.551590560Z" level=info msg="StartContainer for \"b0ec6373539f4518ebb935ad822002f3314501c5d9bb11f080fbb6c380cf8866\" returns successfully" Jan 28 02:31:56.857938 containerd[1508]: time="2026-01-28T02:31:56.857327451Z" level=info msg="StopPodSandbox for \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\"" Jan 28 02:31:56.857938 containerd[1508]: time="2026-01-28T02:31:56.857443975Z" level=info msg="StopPodSandbox for \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\"" Jan 28 02:31:57.077938 containerd[1508]: 2026-01-28 02:31:56.974 [INFO][4722] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:31:57.077938 containerd[1508]: 2026-01-28 02:31:56.974 [INFO][4722] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" iface="eth0" netns="/var/run/netns/cni-c4b16e28-cf68-80f3-5dc8-d38945bba860" Jan 28 02:31:57.077938 containerd[1508]: 2026-01-28 02:31:56.975 [INFO][4722] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" iface="eth0" netns="/var/run/netns/cni-c4b16e28-cf68-80f3-5dc8-d38945bba860" Jan 28 02:31:57.077938 containerd[1508]: 2026-01-28 02:31:56.977 [INFO][4722] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" iface="eth0" netns="/var/run/netns/cni-c4b16e28-cf68-80f3-5dc8-d38945bba860" Jan 28 02:31:57.077938 containerd[1508]: 2026-01-28 02:31:56.977 [INFO][4722] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:31:57.077938 containerd[1508]: 2026-01-28 02:31:56.977 [INFO][4722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:31:57.077938 containerd[1508]: 2026-01-28 02:31:57.045 [INFO][4734] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" HandleID="k8s-pod-network.e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Workload="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:31:57.077938 containerd[1508]: 2026-01-28 02:31:57.045 [INFO][4734] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:57.077938 containerd[1508]: 2026-01-28 02:31:57.045 [INFO][4734] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:31:57.077938 containerd[1508]: 2026-01-28 02:31:57.061 [WARNING][4734] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" HandleID="k8s-pod-network.e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Workload="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:31:57.077938 containerd[1508]: 2026-01-28 02:31:57.061 [INFO][4734] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" HandleID="k8s-pod-network.e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Workload="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:31:57.077938 containerd[1508]: 2026-01-28 02:31:57.066 [INFO][4734] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:31:57.077938 containerd[1508]: 2026-01-28 02:31:57.072 [INFO][4722] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:31:57.078944 containerd[1508]: time="2026-01-28T02:31:57.078478539Z" level=info msg="TearDown network for sandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\" successfully" Jan 28 02:31:57.078944 containerd[1508]: time="2026-01-28T02:31:57.078768602Z" level=info msg="StopPodSandbox for \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\" returns successfully" Jan 28 02:31:57.083320 containerd[1508]: time="2026-01-28T02:31:57.081192806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vjdx,Uid:7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5,Namespace:calico-system,Attempt:1,}" Jan 28 02:31:57.089419 systemd[1]: run-netns-cni\x2dc4b16e28\x2dcf68\x2d80f3\x2d5dc8\x2dd38945bba860.mount: Deactivated successfully. Jan 28 02:31:57.109181 containerd[1508]: 2026-01-28 02:31:56.993 [INFO][4717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:31:57.109181 containerd[1508]: 2026-01-28 02:31:56.994 [INFO][4717] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" iface="eth0" netns="/var/run/netns/cni-a62e1269-d3e4-a8a3-f703-83bcc6954afc" Jan 28 02:31:57.109181 containerd[1508]: 2026-01-28 02:31:56.994 [INFO][4717] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" iface="eth0" netns="/var/run/netns/cni-a62e1269-d3e4-a8a3-f703-83bcc6954afc" Jan 28 02:31:57.109181 containerd[1508]: 2026-01-28 02:31:56.996 [INFO][4717] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" iface="eth0" netns="/var/run/netns/cni-a62e1269-d3e4-a8a3-f703-83bcc6954afc" Jan 28 02:31:57.109181 containerd[1508]: 2026-01-28 02:31:56.996 [INFO][4717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:31:57.109181 containerd[1508]: 2026-01-28 02:31:56.996 [INFO][4717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:31:57.109181 containerd[1508]: 2026-01-28 02:31:57.063 [INFO][4740] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" HandleID="k8s-pod-network.e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:31:57.109181 containerd[1508]: 2026-01-28 02:31:57.063 [INFO][4740] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:57.109181 containerd[1508]: 2026-01-28 02:31:57.066 [INFO][4740] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:31:57.109181 containerd[1508]: 2026-01-28 02:31:57.091 [WARNING][4740] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" HandleID="k8s-pod-network.e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:31:57.109181 containerd[1508]: 2026-01-28 02:31:57.091 [INFO][4740] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" HandleID="k8s-pod-network.e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:31:57.109181 containerd[1508]: 2026-01-28 02:31:57.095 [INFO][4740] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:31:57.109181 containerd[1508]: 2026-01-28 02:31:57.101 [INFO][4717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:31:57.109181 containerd[1508]: time="2026-01-28T02:31:57.107451148Z" level=info msg="TearDown network for sandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\" successfully" Jan 28 02:31:57.109181 containerd[1508]: time="2026-01-28T02:31:57.107490116Z" level=info msg="StopPodSandbox for \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\" returns successfully" Jan 28 02:31:57.116282 containerd[1508]: time="2026-01-28T02:31:57.113617843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mqzws,Uid:cb78a5cb-4de2-4536-925b-fdddfbef361f,Namespace:kube-system,Attempt:1,}" Jan 28 02:31:57.119252 systemd[1]: run-netns-cni\x2da62e1269\x2dd3e4\x2da8a3\x2df703\x2d83bcc6954afc.mount: Deactivated successfully. 
Jan 28 02:31:57.324389 systemd-networkd[1438]: calidbd8c3a1dc7: Link UP Jan 28 02:31:57.326332 systemd-networkd[1438]: calidbd8c3a1dc7: Gained carrier Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.187 [INFO][4750] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0 csi-node-driver- calico-system 7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5 1058 0 2026-01-28 02:31:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-hg60y.gb1.brightbox.com csi-node-driver-9vjdx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidbd8c3a1dc7 [] [] }} ContainerID="dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" Namespace="calico-system" Pod="csi-node-driver-9vjdx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-" Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.189 [INFO][4750] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" Namespace="calico-system" Pod="csi-node-driver-9vjdx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.251 [INFO][4775] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" HandleID="k8s-pod-network.dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" Workload="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.253 [INFO][4775] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" HandleID="k8s-pod-network.dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" Workload="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-hg60y.gb1.brightbox.com", "pod":"csi-node-driver-9vjdx", "timestamp":"2026-01-28 02:31:57.251555137 +0000 UTC"}, Hostname:"srv-hg60y.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.253 [INFO][4775] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.253 [INFO][4775] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.253 [INFO][4775] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hg60y.gb1.brightbox.com' Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.266 [INFO][4775] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.274 [INFO][4775] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.281 [INFO][4775] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.284 [INFO][4775] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.289 [INFO][4775] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.289 [INFO][4775] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.291 [INFO][4775] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.298 [INFO][4775] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.308 [INFO][4775] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.5/26] block=192.168.123.0/26 handle="k8s-pod-network.dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.308 [INFO][4775] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.5/26] handle="k8s-pod-network.dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.308 [INFO][4775] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
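The run of entries from "Trying affinity for 192.168.123.0/26" through "Successfully claimed IPs: [192.168.123.5/26]" traces Calico's block-affinity IPAM: under the host-wide lock, the plugin loads the /26 block affine to this node and claims the first free address in it. A toy Go model of that walk, using an in-memory bitmap (the real plugin persists the block to the datastore before the claim counts, per "Writing block in order to claim IPs"):

    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    // block is a toy model of one affine IPAM block, e.g. the log's
    // 192.168.123.0/26: 64 addresses plus a mutex standing in for the
    // host-wide IPAM lock acquired and released around every claim.
    type block struct {
        mu   sync.Mutex
        cidr net.IPNet
        used [64]bool
    }

    func (b *block) assign() (net.IP, error) {
        b.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer b.mu.Unlock() // "Released host-wide IPAM lock."
        base := b.cidr.IP.To4()
        for i, taken := range b.used {
            if !taken {
                b.used[i] = true
                return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), nil
            }
        }
        return nil, fmt.Errorf("block %s exhausted", b.cidr.String())
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.123.0/26")
        b := &block{cidr: *cidr}
        for i := 0; i < 5; i++ {
            b.used[i] = true // pretend .0-.4 are already held by earlier pods
        }
        ip, _ := b.assign()
        fmt.Println(ip) // 192.168.123.5, matching the claim for csi-node-driver-9vjdx
    }

The two later assignments in this section claim 192.168.123.6 and 192.168.123.7 the same way, from the same block.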
Jan 28 02:31:57.363109 containerd[1508]: 2026-01-28 02:31:57.308 [INFO][4775] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.5/26] IPv6=[] ContainerID="dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" HandleID="k8s-pod-network.dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" Workload="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:31:57.364249 containerd[1508]: 2026-01-28 02:31:57.312 [INFO][4750] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" Namespace="calico-system" Pod="csi-node-driver-9vjdx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-9vjdx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidbd8c3a1dc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:57.364249 containerd[1508]: 2026-01-28 02:31:57.313 [INFO][4750] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.5/32] ContainerID="dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" Namespace="calico-system" Pod="csi-node-driver-9vjdx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:31:57.364249 containerd[1508]: 2026-01-28 02:31:57.313 [INFO][4750] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidbd8c3a1dc7 ContainerID="dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" Namespace="calico-system" Pod="csi-node-driver-9vjdx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:31:57.364249 containerd[1508]: 2026-01-28 02:31:57.328 [INFO][4750] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" Namespace="calico-system" Pod="csi-node-driver-9vjdx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:31:57.364249 containerd[1508]: 2026-01-28 02:31:57.329 [INFO][4750] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" Namespace="calico-system" Pod="csi-node-driver-9vjdx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc", Pod:"csi-node-driver-9vjdx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidbd8c3a1dc7", MAC:"0a:fa:e7:b9:db:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:57.364249 containerd[1508]: 2026-01-28 02:31:57.353 [INFO][4750] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc" Namespace="calico-system" Pod="csi-node-driver-9vjdx" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:31:57.464527 containerd[1508]: time="2026-01-28T02:31:57.464280033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:31:57.464527 containerd[1508]: time="2026-01-28T02:31:57.464666507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:31:57.465570 containerd[1508]: time="2026-01-28T02:31:57.464703630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:57.468523 containerd[1508]: time="2026-01-28T02:31:57.466249577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:57.486178 kubelet[2700]: I0128 02:31:57.486057 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-b4mnx" podStartSLOduration=72.486012849 podStartE2EDuration="1m12.486012849s" podCreationTimestamp="2026-01-28 02:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:31:57.438079782 +0000 UTC m=+78.784627317" watchObservedRunningTime="2026-01-28 02:31:57.486012849 +0000 UTC m=+78.832560389" Jan 28 02:31:57.509395 systemd[1]: Started cri-containerd-dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc.scope - libcontainer container dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc. Jan 28 02:31:57.550020 systemd-networkd[1438]: cali9bf3cacf9a9: Link UP Jan 28 02:31:57.554663 systemd-networkd[1438]: cali9bf3cacf9a9: Gained carrier Jan 28 02:31:57.574467 systemd-networkd[1438]: cali984c4478d11: Gained IPv6LL Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.216 [INFO][4760] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0 coredns-668d6bf9bc- kube-system cb78a5cb-4de2-4536-925b-fdddfbef361f 1059 0 2026-01-28 02:30:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-hg60y.gb1.brightbox.com coredns-668d6bf9bc-mqzws eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9bf3cacf9a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqzws" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-" Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.216 [INFO][4760] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqzws" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.278 [INFO][4780] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" HandleID="k8s-pod-network.62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.279 [INFO][4780] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" HandleID="k8s-pod-network.62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-hg60y.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-mqzws", "timestamp":"2026-01-28 02:31:57.278574416 +0000 UTC"}, Hostname:"srv-hg60y.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.279 [INFO][4780] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.308 [INFO][4780] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.309 [INFO][4780] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hg60y.gb1.brightbox.com' Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.368 [INFO][4780] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.412 [INFO][4780] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.478 [INFO][4780] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.488 [INFO][4780] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.499 [INFO][4780] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.499 [INFO][4780] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.507 [INFO][4780] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06 Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.517 [INFO][4780] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.531 [INFO][4780] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.6/26] block=192.168.123.0/26 handle="k8s-pod-network.62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.531 [INFO][4780] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.6/26] handle="k8s-pod-network.62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.532 [INFO][4780] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
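Worth noticing across these two IPAM transactions: request [4780] (coredns) logged "About to acquire host-wide IPAM lock" at 02:31:57.279 but only acquired it at 02:31:57.308, the instant request [4775] (csi-node-driver) released it. The lock serializes allocations per host, which is also why the claims come out strictly sequential: .5, then .6.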
Jan 28 02:31:57.592859 containerd[1508]: 2026-01-28 02:31:57.532 [INFO][4780] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.6/26] IPv6=[] ContainerID="62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" HandleID="k8s-pod-network.62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:31:57.596538 containerd[1508]: 2026-01-28 02:31:57.536 [INFO][4760] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqzws" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cb78a5cb-4de2-4536-925b-fdddfbef361f", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-mqzws", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9bf3cacf9a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:57.596538 containerd[1508]: 2026-01-28 02:31:57.536 [INFO][4760] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.6/32] ContainerID="62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqzws" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:31:57.596538 containerd[1508]: 2026-01-28 02:31:57.536 [INFO][4760] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bf3cacf9a9 ContainerID="62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqzws" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:31:57.596538 containerd[1508]: 2026-01-28 02:31:57.558 [INFO][4760] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-mqzws" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:31:57.596538 containerd[1508]: 2026-01-28 02:31:57.558 [INFO][4760] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqzws" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cb78a5cb-4de2-4536-925b-fdddfbef361f", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06", Pod:"coredns-668d6bf9bc-mqzws", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9bf3cacf9a9", MAC:"e6:b9:39:2c:95:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:57.596538 containerd[1508]: 2026-01-28 02:31:57.587 [INFO][4760] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06" Namespace="kube-system" Pod="coredns-668d6bf9bc-mqzws" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:31:57.611709 containerd[1508]: time="2026-01-28T02:31:57.611660821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vjdx,Uid:7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5,Namespace:calico-system,Attempt:1,} returns sandbox id \"dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc\"" Jan 28 02:31:57.615998 containerd[1508]: time="2026-01-28T02:31:57.615839102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 02:31:57.641230 containerd[1508]: time="2026-01-28T02:31:57.640700073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:31:57.641230 containerd[1508]: time="2026-01-28T02:31:57.640792884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:31:57.641230 containerd[1508]: time="2026-01-28T02:31:57.640814725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:57.641230 containerd[1508]: time="2026-01-28T02:31:57.640925850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:57.685179 systemd[1]: Started cri-containerd-62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06.scope - libcontainer container 62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06. Jan 28 02:31:57.752263 containerd[1508]: time="2026-01-28T02:31:57.752098625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mqzws,Uid:cb78a5cb-4de2-4536-925b-fdddfbef361f,Namespace:kube-system,Attempt:1,} returns sandbox id \"62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06\"" Jan 28 02:31:57.757904 containerd[1508]: time="2026-01-28T02:31:57.757740965Z" level=info msg="CreateContainer within sandbox \"62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 02:31:57.773468 containerd[1508]: time="2026-01-28T02:31:57.773410206Z" level=info msg="CreateContainer within sandbox \"62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ece4895642a500da5ed179642ef7e10f0fe14de2178272b93dc667619711036a\"" Jan 28 02:31:57.775581 containerd[1508]: time="2026-01-28T02:31:57.774407686Z" level=info msg="StartContainer for \"ece4895642a500da5ed179642ef7e10f0fe14de2178272b93dc667619711036a\"" Jan 28 02:31:57.810368 systemd[1]: Started cri-containerd-ece4895642a500da5ed179642ef7e10f0fe14de2178272b93dc667619711036a.scope - libcontainer container ece4895642a500da5ed179642ef7e10f0fe14de2178272b93dc667619711036a. 
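A small decoding aid for the coredns endpoint dumps above: WorkloadEndpointPort prints its Port field in hex, so Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 8192 + 768 + 192 + 1 = 9153, coredns's Prometheus metrics port.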
Jan 28 02:31:57.852287 containerd[1508]: time="2026-01-28T02:31:57.852240010Z" level=info msg="StartContainer for \"ece4895642a500da5ed179642ef7e10f0fe14de2178272b93dc667619711036a\" returns successfully" Jan 28 02:31:57.854058 containerd[1508]: time="2026-01-28T02:31:57.854022395Z" level=info msg="StopPodSandbox for \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\"" Jan 28 02:31:57.946996 containerd[1508]: time="2026-01-28T02:31:57.946288473Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:31:57.948393 containerd[1508]: time="2026-01-28T02:31:57.948186146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 02:31:57.948393 containerd[1508]: time="2026-01-28T02:31:57.948304571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 02:31:57.950525 kubelet[2700]: E0128 02:31:57.948801 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:31:57.950525 kubelet[2700]: E0128 02:31:57.949033 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:31:57.950525 kubelet[2700]: E0128 02:31:57.949291 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66wbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vjdx_calico-system(7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 02:31:57.954887 containerd[1508]: time="2026-01-28T02:31:57.954837996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 02:31:58.042996 containerd[1508]: 2026-01-28 02:31:57.975 [INFO][4941] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:31:58.042996 containerd[1508]: 2026-01-28 02:31:57.976 [INFO][4941] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" iface="eth0" netns="/var/run/netns/cni-c230a234-c60d-c2ba-6979-56a9b11d9b8d" Jan 28 02:31:58.042996 containerd[1508]: 2026-01-28 02:31:57.976 [INFO][4941] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" iface="eth0" netns="/var/run/netns/cni-c230a234-c60d-c2ba-6979-56a9b11d9b8d" Jan 28 02:31:58.042996 containerd[1508]: 2026-01-28 02:31:57.976 [INFO][4941] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" iface="eth0" netns="/var/run/netns/cni-c230a234-c60d-c2ba-6979-56a9b11d9b8d" Jan 28 02:31:58.042996 containerd[1508]: 2026-01-28 02:31:57.977 [INFO][4941] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:31:58.042996 containerd[1508]: 2026-01-28 02:31:57.977 [INFO][4941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:31:58.042996 containerd[1508]: 2026-01-28 02:31:58.024 [INFO][4952] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" HandleID="k8s-pod-network.7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Workload="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:31:58.042996 containerd[1508]: 2026-01-28 02:31:58.024 [INFO][4952] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:58.042996 containerd[1508]: 2026-01-28 02:31:58.024 [INFO][4952] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:31:58.042996 containerd[1508]: 2026-01-28 02:31:58.034 [WARNING][4952] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" HandleID="k8s-pod-network.7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Workload="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:31:58.042996 containerd[1508]: 2026-01-28 02:31:58.034 [INFO][4952] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" HandleID="k8s-pod-network.7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Workload="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:31:58.042996 containerd[1508]: 2026-01-28 02:31:58.038 [INFO][4952] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:31:58.042996 containerd[1508]: 2026-01-28 02:31:58.040 [INFO][4941] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:31:58.045315 containerd[1508]: time="2026-01-28T02:31:58.045273521Z" level=info msg="TearDown network for sandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\" successfully" Jan 28 02:31:58.045422 containerd[1508]: time="2026-01-28T02:31:58.045315227Z" level=info msg="StopPodSandbox for \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\" returns successfully" Jan 28 02:31:58.046228 containerd[1508]: time="2026-01-28T02:31:58.046184372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bvmzd,Uid:345054d8-51ec-4ec2-90c7-329ebe97ba46,Namespace:calico-system,Attempt:1,}" Jan 28 02:31:58.048924 systemd[1]: run-netns-cni\x2dc230a234\x2dc60d\x2dc2ba\x2d6979\x2d56a9b11d9b8d.mount: Deactivated successfully. 
Jan 28 02:31:58.237341 systemd-networkd[1438]: cali636de0fa910: Link UP Jan 28 02:31:58.239822 systemd-networkd[1438]: cali636de0fa910: Gained carrier Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.130 [INFO][4958] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0 goldmane-666569f655- calico-system 345054d8-51ec-4ec2-90c7-329ebe97ba46 1084 0 2026-01-28 02:31:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-hg60y.gb1.brightbox.com goldmane-666569f655-bvmzd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali636de0fa910 [] [] }} ContainerID="b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" Namespace="calico-system" Pod="goldmane-666569f655-bvmzd" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-" Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.130 [INFO][4958] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" Namespace="calico-system" Pod="goldmane-666569f655-bvmzd" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.173 [INFO][4970] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" HandleID="k8s-pod-network.b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" Workload="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.173 [INFO][4970] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" HandleID="k8s-pod-network.b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" Workload="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f840), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-hg60y.gb1.brightbox.com", "pod":"goldmane-666569f655-bvmzd", "timestamp":"2026-01-28 02:31:58.173623826 +0000 UTC"}, Hostname:"srv-hg60y.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.173 [INFO][4970] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.173 [INFO][4970] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.174 [INFO][4970] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hg60y.gb1.brightbox.com' Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.184 [INFO][4970] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.194 [INFO][4970] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.201 [INFO][4970] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.204 [INFO][4970] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.208 [INFO][4970] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.208 [INFO][4970] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.210 [INFO][4970] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39 Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.217 [INFO][4970] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.227 [INFO][4970] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.7/26] block=192.168.123.0/26 handle="k8s-pod-network.b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.227 [INFO][4970] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.7/26] handle="k8s-pod-network.b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.227 [INFO][4970] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 02:31:58.266872 containerd[1508]: 2026-01-28 02:31:58.227 [INFO][4970] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.7/26] IPv6=[] ContainerID="b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" HandleID="k8s-pod-network.b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" Workload="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:31:58.268925 containerd[1508]: 2026-01-28 02:31:58.231 [INFO][4958] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" Namespace="calico-system" Pod="goldmane-666569f655-bvmzd" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"345054d8-51ec-4ec2-90c7-329ebe97ba46", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-bvmzd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali636de0fa910", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:58.268925 containerd[1508]: 2026-01-28 02:31:58.231 [INFO][4958] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.7/32] ContainerID="b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" Namespace="calico-system" Pod="goldmane-666569f655-bvmzd" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:31:58.268925 containerd[1508]: 2026-01-28 02:31:58.231 [INFO][4958] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali636de0fa910 ContainerID="b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" Namespace="calico-system" Pod="goldmane-666569f655-bvmzd" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:31:58.268925 containerd[1508]: 2026-01-28 02:31:58.239 [INFO][4958] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" Namespace="calico-system" Pod="goldmane-666569f655-bvmzd" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:31:58.268925 containerd[1508]: 2026-01-28 02:31:58.240 [INFO][4958] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" 
Namespace="calico-system" Pod="goldmane-666569f655-bvmzd" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"345054d8-51ec-4ec2-90c7-329ebe97ba46", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39", Pod:"goldmane-666569f655-bvmzd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali636de0fa910", MAC:"2a:9e:63:0a:8c:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:58.268925 containerd[1508]: 2026-01-28 02:31:58.261 [INFO][4958] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39" Namespace="calico-system" Pod="goldmane-666569f655-bvmzd" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:31:58.291768 containerd[1508]: time="2026-01-28T02:31:58.291577363Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:31:58.294393 containerd[1508]: time="2026-01-28T02:31:58.294320949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 02:31:58.294865 containerd[1508]: time="2026-01-28T02:31:58.294532089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 02:31:58.295314 kubelet[2700]: E0128 02:31:58.295237 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:31:58.295314 kubelet[2700]: E0128 02:31:58.295305 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:31:58.295569 kubelet[2700]: E0128 02:31:58.295470 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66wbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vjdx_calico-system(7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 02:31:58.297919 kubelet[2700]: E0128 02:31:58.297857 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:31:58.310291 containerd[1508]: time="2026-01-28T02:31:58.308813738Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:31:58.310719 containerd[1508]: time="2026-01-28T02:31:58.310306154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:31:58.311403 containerd[1508]: time="2026-01-28T02:31:58.310675340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:58.311403 containerd[1508]: time="2026-01-28T02:31:58.310828676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:58.351410 systemd[1]: Started cri-containerd-b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39.scope - libcontainer container b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39. Jan 28 02:31:58.366791 kubelet[2700]: E0128 02:31:58.366676 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:31:58.381270 kubelet[2700]: I0128 02:31:58.380980 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mqzws" podStartSLOduration=73.380938982 podStartE2EDuration="1m13.380938982s" podCreationTimestamp="2026-01-28 02:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:31:58.379037521 +0000 UTC m=+79.725585060" watchObservedRunningTime="2026-01-28 02:31:58.380938982 +0000 UTC m=+79.727486516" Jan 28 02:31:58.455792 containerd[1508]: time="2026-01-28T02:31:58.455725384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bvmzd,Uid:345054d8-51ec-4ec2-90c7-329ebe97ba46,Namespace:calico-system,Attempt:1,} returns sandbox id \"b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39\"" Jan 28 02:31:58.458356 containerd[1508]: time="2026-01-28T02:31:58.458307937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 02:31:58.785882 containerd[1508]: time="2026-01-28T02:31:58.785778315Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:31:58.808260 containerd[1508]: time="2026-01-28T02:31:58.808163853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 02:31:58.808575 containerd[1508]: time="2026-01-28T02:31:58.808413534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 02:31:58.808785 kubelet[2700]: E0128 02:31:58.808728 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:31:58.809413 kubelet[2700]: E0128 02:31:58.808802 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:31:58.809413 kubelet[2700]: E0128 02:31:58.809017 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8zps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bvmzd_calico-system(345054d8-51ec-4ec2-90c7-329ebe97ba46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 02:31:58.811270 kubelet[2700]: E0128 02:31:58.810529 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bvmzd" podUID="345054d8-51ec-4ec2-90c7-329ebe97ba46" Jan 28 02:31:58.857367 containerd[1508]: time="2026-01-28T02:31:58.857301897Z" level=info msg="StopPodSandbox for \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\"" Jan 28 02:31:59.035180 containerd[1508]: 2026-01-28 02:31:58.952 [INFO][5041] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:31:59.035180 containerd[1508]: 2026-01-28 02:31:58.955 [INFO][5041] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" iface="eth0" netns="/var/run/netns/cni-98c038fc-1af9-e0a8-d993-4fbbd14bbb92" Jan 28 02:31:59.035180 containerd[1508]: 2026-01-28 02:31:58.956 [INFO][5041] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" iface="eth0" netns="/var/run/netns/cni-98c038fc-1af9-e0a8-d993-4fbbd14bbb92" Jan 28 02:31:59.035180 containerd[1508]: 2026-01-28 02:31:58.956 [INFO][5041] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" iface="eth0" netns="/var/run/netns/cni-98c038fc-1af9-e0a8-d993-4fbbd14bbb92" Jan 28 02:31:59.035180 containerd[1508]: 2026-01-28 02:31:58.957 [INFO][5041] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:31:59.035180 containerd[1508]: 2026-01-28 02:31:58.957 [INFO][5041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:31:59.035180 containerd[1508]: 2026-01-28 02:31:59.009 [INFO][5049] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" HandleID="k8s-pod-network.eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:31:59.035180 containerd[1508]: 2026-01-28 02:31:59.010 [INFO][5049] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:59.035180 containerd[1508]: 2026-01-28 02:31:59.010 [INFO][5049] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:31:59.035180 containerd[1508]: 2026-01-28 02:31:59.024 [WARNING][5049] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" HandleID="k8s-pod-network.eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:31:59.035180 containerd[1508]: 2026-01-28 02:31:59.024 [INFO][5049] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" HandleID="k8s-pod-network.eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:31:59.035180 containerd[1508]: 2026-01-28 02:31:59.027 [INFO][5049] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:31:59.035180 containerd[1508]: 2026-01-28 02:31:59.030 [INFO][5041] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:31:59.038552 containerd[1508]: time="2026-01-28T02:31:59.033128812Z" level=info msg="TearDown network for sandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\" successfully" Jan 28 02:31:59.038552 containerd[1508]: time="2026-01-28T02:31:59.036277779Z" level=info msg="StopPodSandbox for \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\" returns successfully" Jan 28 02:31:59.040888 systemd[1]: run-netns-cni\x2d98c038fc\x2d1af9\x2de0a8\x2dd993\x2d4fbbd14bbb92.mount: Deactivated successfully. 
Jan 28 02:31:59.042801 containerd[1508]: time="2026-01-28T02:31:59.041250530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7866ff566b-tbgpj,Uid:cd42b56d-5021-410e-8408-e15b3c52f065,Namespace:calico-apiserver,Attempt:1,}" Jan 28 02:31:59.266520 systemd-networkd[1438]: calif555d5a8604: Link UP Jan 28 02:31:59.268738 systemd-networkd[1438]: calif555d5a8604: Gained carrier Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.134 [INFO][5059] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0 calico-apiserver-7866ff566b- calico-apiserver cd42b56d-5021-410e-8408-e15b3c52f065 1109 0 2026-01-28 02:31:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7866ff566b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-hg60y.gb1.brightbox.com calico-apiserver-7866ff566b-tbgpj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif555d5a8604 [] [] }} ContainerID="1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-tbgpj" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-" Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.134 [INFO][5059] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-tbgpj" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.189 [INFO][5074] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" HandleID="k8s-pod-network.1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.189 [INFO][5074] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" HandleID="k8s-pod-network.1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-hg60y.gb1.brightbox.com", "pod":"calico-apiserver-7866ff566b-tbgpj", "timestamp":"2026-01-28 02:31:59.1892492 +0000 UTC"}, Hostname:"srv-hg60y.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.189 [INFO][5074] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.189 [INFO][5074] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.190 [INFO][5074] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hg60y.gb1.brightbox.com' Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.202 [INFO][5074] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.214 [INFO][5074] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.221 [INFO][5074] ipam/ipam.go 511: Trying affinity for 192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.223 [INFO][5074] ipam/ipam.go 158: Attempting to load block cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.230 [INFO][5074] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.123.0/26 host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.230 [INFO][5074] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.123.0/26 handle="k8s-pod-network.1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.232 [INFO][5074] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26 Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.240 [INFO][5074] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.123.0/26 handle="k8s-pod-network.1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.256 [INFO][5074] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.123.8/26] block=192.168.123.0/26 handle="k8s-pod-network.1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.256 [INFO][5074] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.123.8/26] handle="k8s-pod-network.1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" host="srv-hg60y.gb1.brightbox.com" Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.256 [INFO][5074] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 02:31:59.293992 containerd[1508]: 2026-01-28 02:31:59.257 [INFO][5074] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.123.8/26] IPv6=[] ContainerID="1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" HandleID="k8s-pod-network.1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:31:59.295258 containerd[1508]: 2026-01-28 02:31:59.259 [INFO][5059] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-tbgpj" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0", GenerateName:"calico-apiserver-7866ff566b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd42b56d-5021-410e-8408-e15b3c52f065", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7866ff566b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7866ff566b-tbgpj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif555d5a8604", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:59.295258 containerd[1508]: 2026-01-28 02:31:59.260 [INFO][5059] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.123.8/32] ContainerID="1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-tbgpj" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:31:59.295258 containerd[1508]: 2026-01-28 02:31:59.260 [INFO][5059] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif555d5a8604 ContainerID="1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-tbgpj" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:31:59.295258 containerd[1508]: 2026-01-28 02:31:59.264 [INFO][5059] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-tbgpj" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:31:59.295258 containerd[1508]: 2026-01-28 02:31:59.266 
[INFO][5059] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-tbgpj" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0", GenerateName:"calico-apiserver-7866ff566b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd42b56d-5021-410e-8408-e15b3c52f065", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7866ff566b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26", Pod:"calico-apiserver-7866ff566b-tbgpj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif555d5a8604", MAC:"b2:a6:7d:8f:39:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:31:59.295258 containerd[1508]: 2026-01-28 02:31:59.290 [INFO][5059] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26" Namespace="calico-apiserver" Pod="calico-apiserver-7866ff566b-tbgpj" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:31:59.302571 systemd-networkd[1438]: calidbd8c3a1dc7: Gained IPv6LL Jan 28 02:31:59.329114 containerd[1508]: time="2026-01-28T02:31:59.328891422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:31:59.330742 containerd[1508]: time="2026-01-28T02:31:59.330406119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:31:59.330742 containerd[1508]: time="2026-01-28T02:31:59.330452343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:59.334166 containerd[1508]: time="2026-01-28T02:31:59.330612743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:31:59.371335 systemd[1]: Started cri-containerd-1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26.scope - libcontainer container 1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26. 
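
[Editor's note] The ipam.go entries above trace Calico's block-affinity allocation: the node srv-hg60y.gb1.brightbox.com holds an affinity for the /26 block 192.168.123.0/26, loads the block, claims the first free address (192.168.123.8 here), and writes the block back to the datastore to persist the claim, all under the host-wide IPAM lock. A toy Go sketch of first-free assignment inside an affine block, with an in-memory map standing in for the datastore write ("Writing block in order to claim IPs") and no modeling of Calico's real reservations:

package main

import (
	"fmt"
	"net/netip"
)

// assign returns the first unclaimed address in the block and records the
// handle that owns it. Names and structure are illustrative only.
func assign(block netip.Prefix, inUse map[netip.Addr]string, handle string) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if _, taken := inUse[a]; !taken {
			inUse[a] = handle
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.123.0/26")
	inUse := map[netip.Addr]string{}
	// Pretend eight earlier workloads already hold .0 through .7,
	// as the log implies for this node.
	for i := 0; i < 8; i++ {
		assign(block, inUse, fmt.Sprintf("earlier-pod-%d", i))
	}
	ip, _ := assign(block, inUse, "k8s-pod-network.1a3a4f6b4353")
	fmt.Println(ip) // 192.168.123.8, matching "Successfully claimed IPs"
}
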
Jan 28 02:31:59.378030 kubelet[2700]: E0128 02:31:59.376410 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bvmzd" podUID="345054d8-51ec-4ec2-90c7-329ebe97ba46" Jan 28 02:31:59.383976 kubelet[2700]: E0128 02:31:59.383904 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:31:59.470476 containerd[1508]: time="2026-01-28T02:31:59.470416181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7866ff566b-tbgpj,Uid:cd42b56d-5021-410e-8408-e15b3c52f065,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26\"" Jan 28 02:31:59.473854 containerd[1508]: time="2026-01-28T02:31:59.473404674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:31:59.494392 systemd-networkd[1438]: cali9bf3cacf9a9: Gained IPv6LL Jan 28 02:31:59.782043 containerd[1508]: time="2026-01-28T02:31:59.781551040Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:31:59.783165 containerd[1508]: time="2026-01-28T02:31:59.783113132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:31:59.783886 containerd[1508]: time="2026-01-28T02:31:59.783208217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 02:31:59.784207 kubelet[2700]: E0128 02:31:59.784110 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:31:59.784207 kubelet[2700]: E0128 02:31:59.784187 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:31:59.784415 kubelet[2700]: E0128 02:31:59.784351 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzjjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7866ff566b-tbgpj_calico-apiserver(cd42b56d-5021-410e-8408-e15b3c52f065): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:31:59.785635 kubelet[2700]: E0128 02:31:59.785590 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" podUID="cd42b56d-5021-410e-8408-e15b3c52f065" Jan 28 02:31:59.878549 systemd-networkd[1438]: cali636de0fa910: Gained IPv6LL Jan 28 02:32:00.379217 kubelet[2700]: E0128 02:32:00.379099 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" podUID="cd42b56d-5021-410e-8408-e15b3c52f065" Jan 28 02:32:00.380442 kubelet[2700]: E0128 02:32:00.379225 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bvmzd" podUID="345054d8-51ec-4ec2-90c7-329ebe97ba46" Jan 28 02:32:01.286716 systemd-networkd[1438]: calif555d5a8604: Gained IPv6LL Jan 28 02:32:01.381033 kubelet[2700]: E0128 02:32:01.380502 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" podUID="cd42b56d-5021-410e-8408-e15b3c52f065" Jan 28 02:32:01.858322 containerd[1508]: time="2026-01-28T02:32:01.857201068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 02:32:02.175123 containerd[1508]: time="2026-01-28T02:32:02.174659187Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:02.176919 containerd[1508]: time="2026-01-28T02:32:02.176858570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 02:32:02.177205 containerd[1508]: time="2026-01-28T02:32:02.177088690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 02:32:02.177892 kubelet[2700]: E0128 02:32:02.177585 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:32:02.177892 kubelet[2700]: E0128 02:32:02.177662 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 
02:32:02.178108 kubelet[2700]: E0128 02:32:02.177925 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:50e6dbe8f85b451e9c2d8f88eee475c8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59xvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd7f4d764-szmpb_calico-system(9d9dfd5e-a429-4d13-9ec6-6f8b582ac456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:02.179799 containerd[1508]: time="2026-01-28T02:32:02.178893707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:32:02.503656 containerd[1508]: time="2026-01-28T02:32:02.503583364Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:02.505133 containerd[1508]: time="2026-01-28T02:32:02.504966347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:32:02.505133 containerd[1508]: time="2026-01-28T02:32:02.505039952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 02:32:02.505336 kubelet[2700]: E0128 02:32:02.505294 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:32:02.506887 kubelet[2700]: E0128 02:32:02.505395 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:32:02.506887 kubelet[2700]: E0128 02:32:02.505847 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5szp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7866ff566b-wrtzl_calico-apiserver(0a6be4a3-a931-4bdf-98fa-3be5929a5064): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:02.507447 containerd[1508]: time="2026-01-28T02:32:02.505979053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 02:32:02.507678 kubelet[2700]: E0128 02:32:02.507103 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" podUID="0a6be4a3-a931-4bdf-98fa-3be5929a5064" Jan 28 02:32:02.823746 containerd[1508]: time="2026-01-28T02:32:02.822905788Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 
02:32:02.825623 containerd[1508]: time="2026-01-28T02:32:02.825447332Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 02:32:02.825623 containerd[1508]: time="2026-01-28T02:32:02.825505908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 02:32:02.826029 kubelet[2700]: E0128 02:32:02.825959 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:32:02.826128 kubelet[2700]: E0128 02:32:02.826040 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:32:02.826625 kubelet[2700]: E0128 02:32:02.826224 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59xvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd7f4d764-szmpb_calico-system(9d9dfd5e-a429-4d13-9ec6-6f8b582ac456): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:02.827623 kubelet[2700]: E0128 02:32:02.827569 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd7f4d764-szmpb" podUID="9d9dfd5e-a429-4d13-9ec6-6f8b582ac456" Jan 28 02:32:10.859519 containerd[1508]: time="2026-01-28T02:32:10.859019760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 02:32:11.177840 containerd[1508]: time="2026-01-28T02:32:11.177370367Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:11.178645 containerd[1508]: time="2026-01-28T02:32:11.178583732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 02:32:11.178930 containerd[1508]: time="2026-01-28T02:32:11.178713505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 02:32:11.179111 kubelet[2700]: E0128 02:32:11.179031 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:32:11.179815 kubelet[2700]: E0128 02:32:11.179163 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:32:11.179815 kubelet[2700]: E0128 02:32:11.179421 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5mfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-858bccccf6-bqm86_calico-system(19aa6a03-3b76-49c3-840d-da43872b111b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:11.181078 kubelet[2700]: E0128 02:32:11.181035 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" podUID="19aa6a03-3b76-49c3-840d-da43872b111b" Jan 28 02:32:12.857732 containerd[1508]: time="2026-01-28T02:32:12.857452224Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:32:13.177932 containerd[1508]: time="2026-01-28T02:32:13.177655323Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:13.179606 containerd[1508]: time="2026-01-28T02:32:13.179482231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:32:13.179606 containerd[1508]: time="2026-01-28T02:32:13.179530352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 02:32:13.180056 kubelet[2700]: E0128 02:32:13.179982 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:32:13.180612 kubelet[2700]: E0128 02:32:13.180074 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:32:13.180612 kubelet[2700]: E0128 02:32:13.180529 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzjjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7866ff566b-tbgpj_calico-apiserver(cd42b56d-5021-410e-8408-e15b3c52f065): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:13.182806 containerd[1508]: time="2026-01-28T02:32:13.182767473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 02:32:13.184520 kubelet[2700]: E0128 02:32:13.184480 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" podUID="cd42b56d-5021-410e-8408-e15b3c52f065" Jan 28 02:32:13.221634 systemd[1]: Started sshd@9-10.230.34.254:22-68.220.241.50:48468.service - OpenSSH per-connection server daemon (68.220.241.50:48468). 
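
[Editor's note] From here the pulls settle into the ImagePullBackOff cycle visible in the surrounding entries: kubelet retries each failing image on a per-image doubling backoff, which is why the same NotFound error reappears at widening intervals. The base and cap below (10s doubling to a 5-minute ceiling) match kubelet's defaults to the best of my knowledge; treat the exact values as an assumption. A sketch of the resulting schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: per-image backoff doubling from 10s
	// up to a 5m ceiling.
	delay, limit := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d failed; next retry in %s\n", attempt, delay)
		delay *= 2
		if delay > limit {
			delay = limit
		}
	}
}
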
Jan 28 02:32:13.504413 containerd[1508]: time="2026-01-28T02:32:13.504133695Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:13.505303 containerd[1508]: time="2026-01-28T02:32:13.505242680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 02:32:13.505670 containerd[1508]: time="2026-01-28T02:32:13.505285177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 02:32:13.505790 kubelet[2700]: E0128 02:32:13.505720 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:32:13.505899 kubelet[2700]: E0128 02:32:13.505804 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:32:13.507732 kubelet[2700]: E0128 02:32:13.506132 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66wbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vjdx_calico-system(7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:13.507882 containerd[1508]: time="2026-01-28T02:32:13.506260599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 02:32:13.832650 containerd[1508]: time="2026-01-28T02:32:13.832496451Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:13.833649 containerd[1508]: time="2026-01-28T02:32:13.833594889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 02:32:13.833751 containerd[1508]: time="2026-01-28T02:32:13.833696238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 02:32:13.836167 kubelet[2700]: E0128 02:32:13.834025 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:32:13.836167 kubelet[2700]: E0128 02:32:13.834106 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:32:13.836167 kubelet[2700]: E0128 02:32:13.834438 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8zps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bvmzd_calico-system(345054d8-51ec-4ec2-90c7-329ebe97ba46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:13.836483 containerd[1508]: time="2026-01-28T02:32:13.835089198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 02:32:13.836815 kubelet[2700]: E0128 02:32:13.836716 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bvmzd" podUID="345054d8-51ec-4ec2-90c7-329ebe97ba46" Jan 28 02:32:13.846173 sshd[5148]: Accepted publickey for core from 68.220.241.50 port 48468 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:32:13.847536 sshd[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:32:13.864006 systemd-logind[1488]: New session 12 of user core. Jan 28 02:32:13.873386 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 02:32:14.150569 containerd[1508]: time="2026-01-28T02:32:14.149974914Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:14.152974 containerd[1508]: time="2026-01-28T02:32:14.152580142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 02:32:14.152974 containerd[1508]: time="2026-01-28T02:32:14.152704755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 02:32:14.156121 kubelet[2700]: E0128 02:32:14.155351 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:32:14.156121 kubelet[2700]: E0128 02:32:14.155434 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:32:14.156121 kubelet[2700]: E0128 02:32:14.155616 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66wbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vjdx_calico-system(7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:14.157732 kubelet[2700]: E0128 02:32:14.157681 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:32:14.852667 sshd[5148]: pam_unix(sshd:session): session closed for user core Jan 28 02:32:14.858770 systemd[1]: sshd@9-10.230.34.254:22-68.220.241.50:48468.service: Deactivated successfully. 
Jan 28 02:32:14.868069 kubelet[2700]: E0128 02:32:14.867964 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd7f4d764-szmpb" podUID="9d9dfd5e-a429-4d13-9ec6-6f8b582ac456" Jan 28 02:32:14.869602 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 02:32:14.874535 systemd-logind[1488]: Session 12 logged out. Waiting for processes to exit. Jan 28 02:32:14.877439 systemd-logind[1488]: Removed session 12. Jan 28 02:32:15.856755 kubelet[2700]: E0128 02:32:15.856642 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" podUID="0a6be4a3-a931-4bdf-98fa-3be5929a5064" Jan 28 02:32:19.960568 systemd[1]: Started sshd@10-10.230.34.254:22-68.220.241.50:48482.service - OpenSSH per-connection server daemon (68.220.241.50:48482). Jan 28 02:32:20.600025 sshd[5194]: Accepted publickey for core from 68.220.241.50 port 48482 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:32:20.602481 sshd[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:32:20.611887 systemd-logind[1488]: New session 13 of user core. Jan 28 02:32:20.617427 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 02:32:21.343619 sshd[5194]: pam_unix(sshd:session): session closed for user core Jan 28 02:32:21.348081 systemd-logind[1488]: Session 13 logged out. Waiting for processes to exit. Jan 28 02:32:21.348638 systemd[1]: sshd@10-10.230.34.254:22-68.220.241.50:48482.service: Deactivated successfully. Jan 28 02:32:21.351272 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 02:32:21.353768 systemd-logind[1488]: Removed session 13. 
Jan 28 02:32:23.856473 kubelet[2700]: E0128 02:32:23.856381 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" podUID="19aa6a03-3b76-49c3-840d-da43872b111b" Jan 28 02:32:24.858699 kubelet[2700]: E0128 02:32:24.858357 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bvmzd" podUID="345054d8-51ec-4ec2-90c7-329ebe97ba46" Jan 28 02:32:24.868418 kubelet[2700]: E0128 02:32:24.868347 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:32:26.449484 systemd[1]: Started sshd@11-10.230.34.254:22-68.220.241.50:40700.service - OpenSSH per-connection server daemon (68.220.241.50:40700). Jan 28 02:32:27.026782 sshd[5208]: Accepted publickey for core from 68.220.241.50 port 40700 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:32:27.029504 sshd[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:32:27.037822 systemd-logind[1488]: New session 14 of user core. Jan 28 02:32:27.043388 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 02:32:27.535681 sshd[5208]: pam_unix(sshd:session): session closed for user core Jan 28 02:32:27.542227 systemd[1]: sshd@11-10.230.34.254:22-68.220.241.50:40700.service: Deactivated successfully. Jan 28 02:32:27.545297 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 02:32:27.547077 systemd-logind[1488]: Session 14 logged out. Waiting for processes to exit. Jan 28 02:32:27.549008 systemd-logind[1488]: Removed session 14. 
Jan 28 02:32:27.856404 kubelet[2700]: E0128 02:32:27.856216 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" podUID="cd42b56d-5021-410e-8408-e15b3c52f065" Jan 28 02:32:27.859387 containerd[1508]: time="2026-01-28T02:32:27.859299793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:32:28.191613 containerd[1508]: time="2026-01-28T02:32:28.191431532Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:28.193891 containerd[1508]: time="2026-01-28T02:32:28.193623679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:32:28.193891 containerd[1508]: time="2026-01-28T02:32:28.193793791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 02:32:28.194212 kubelet[2700]: E0128 02:32:28.194085 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:32:28.194376 kubelet[2700]: E0128 02:32:28.194227 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:32:28.195667 kubelet[2700]: E0128 02:32:28.194509 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5szp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7866ff566b-wrtzl_calico-apiserver(0a6be4a3-a931-4bdf-98fa-3be5929a5064): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:28.196000 kubelet[2700]: E0128 02:32:28.195930 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" podUID="0a6be4a3-a931-4bdf-98fa-3be5929a5064" Jan 28 02:32:29.857068 containerd[1508]: time="2026-01-28T02:32:29.856715746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 02:32:30.041192 update_engine[1489]: I20260128 02:32:30.041038 1489 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 28 02:32:30.041192 update_engine[1489]: I20260128 02:32:30.041177 1489 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 28 02:32:30.043688 update_engine[1489]: I20260128 02:32:30.043612 1489 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 28 02:32:30.045078 update_engine[1489]: I20260128 02:32:30.045014 1489 omaha_request_params.cc:62] Current group set to lts Jan 28 02:32:30.045319 
update_engine[1489]: I20260128 02:32:30.045275 1489 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 28 02:32:30.045319 update_engine[1489]: I20260128 02:32:30.045306 1489 update_attempter.cc:643] Scheduling an action processor start. Jan 28 02:32:30.045461 update_engine[1489]: I20260128 02:32:30.045341 1489 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 02:32:30.045461 update_engine[1489]: I20260128 02:32:30.045420 1489 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 28 02:32:30.045567 update_engine[1489]: I20260128 02:32:30.045535 1489 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 02:32:30.045632 update_engine[1489]: I20260128 02:32:30.045562 1489 omaha_request_action.cc:272] Request: Jan 28 02:32:30.045632 update_engine[1489]: [multi-line Omaha request XML not captured in this log] Jan 28 02:32:30.045632 update_engine[1489]: I20260128 02:32:30.045583 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 02:32:30.060490 update_engine[1489]: I20260128 02:32:30.058820 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 02:32:30.060490 update_engine[1489]: I20260128 02:32:30.059294 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 02:32:30.070290 update_engine[1489]: E20260128 02:32:30.070092 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 02:32:30.070290 update_engine[1489]: I20260128 02:32:30.070242 1489 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 28 02:32:30.080452 locksmithd[1529]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 28 02:32:30.188017 containerd[1508]: time="2026-01-28T02:32:30.187764740Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:30.189954 containerd[1508]: time="2026-01-28T02:32:30.189863117Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 02:32:30.190094 containerd[1508]: time="2026-01-28T02:32:30.189999236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 02:32:30.190441 kubelet[2700]: E0128 02:32:30.190352 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:32:30.190923 kubelet[2700]: E0128 02:32:30.190462 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:32:30.190923 kubelet[2700]: E0128 02:32:30.190636 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:50e6dbe8f85b451e9c2d8f88eee475c8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59xvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd7f4d764-szmpb_calico-system(9d9dfd5e-a429-4d13-9ec6-6f8b582ac456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:30.194087 containerd[1508]: time="2026-01-28T02:32:30.193522698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 02:32:30.514399 containerd[1508]: time="2026-01-28T02:32:30.514319872Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:30.515534 containerd[1508]: time="2026-01-28T02:32:30.515471572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 02:32:30.515641 containerd[1508]: time="2026-01-28T02:32:30.515601418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 02:32:30.515881 kubelet[2700]: E0128 02:32:30.515806 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:32:30.516005 kubelet[2700]: E0128 02:32:30.515889 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:32:30.520731 kubelet[2700]: E0128 02:32:30.520080 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59xvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd7f4d764-szmpb_calico-system(9d9dfd5e-a429-4d13-9ec6-6f8b582ac456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:30.522257 kubelet[2700]: E0128 02:32:30.522123 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd7f4d764-szmpb" podUID="9d9dfd5e-a429-4d13-9ec6-6f8b582ac456" Jan 28 02:32:32.651533 systemd[1]: Started sshd@12-10.230.34.254:22-68.220.241.50:38978.service - OpenSSH per-connection 
server daemon (68.220.241.50:38978). Jan 28 02:32:33.260798 sshd[5230]: Accepted publickey for core from 68.220.241.50 port 38978 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:32:33.263621 sshd[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:32:33.271421 systemd-logind[1488]: New session 15 of user core. Jan 28 02:32:33.284455 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 02:32:33.797331 sshd[5230]: pam_unix(sshd:session): session closed for user core Jan 28 02:32:33.802181 systemd[1]: sshd@12-10.230.34.254:22-68.220.241.50:38978.service: Deactivated successfully. Jan 28 02:32:33.804613 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 02:32:33.806514 systemd-logind[1488]: Session 15 logged out. Waiting for processes to exit. Jan 28 02:32:33.808536 systemd-logind[1488]: Removed session 15. Jan 28 02:32:33.902598 systemd[1]: Started sshd@13-10.230.34.254:22-68.220.241.50:38986.service - OpenSSH per-connection server daemon (68.220.241.50:38986). Jan 28 02:32:34.500097 sshd[5243]: Accepted publickey for core from 68.220.241.50 port 38986 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:32:34.503356 sshd[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:32:34.510555 systemd-logind[1488]: New session 16 of user core. Jan 28 02:32:34.516395 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 02:32:34.860880 containerd[1508]: time="2026-01-28T02:32:34.858957980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 02:32:35.146199 sshd[5243]: pam_unix(sshd:session): session closed for user core Jan 28 02:32:35.151628 systemd[1]: sshd@13-10.230.34.254:22-68.220.241.50:38986.service: Deactivated successfully. Jan 28 02:32:35.154983 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 02:32:35.159135 systemd-logind[1488]: Session 16 logged out. Waiting for processes to exit. Jan 28 02:32:35.161037 systemd-logind[1488]: Removed session 16. 
Jan 28 02:32:35.190656 containerd[1508]: time="2026-01-28T02:32:35.190528196Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:35.192260 containerd[1508]: time="2026-01-28T02:32:35.192168818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 02:32:35.192406 containerd[1508]: time="2026-01-28T02:32:35.192185557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 02:32:35.192714 kubelet[2700]: E0128 02:32:35.192632 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:32:35.193308 kubelet[2700]: E0128 02:32:35.192742 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:32:35.193308 kubelet[2700]: E0128 02:32:35.193017 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5mfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-858bccccf6-bqm86_calico-system(19aa6a03-3b76-49c3-840d-da43872b111b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:35.194842 kubelet[2700]: E0128 02:32:35.194764 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" podUID="19aa6a03-3b76-49c3-840d-da43872b111b" Jan 28 02:32:35.255557 systemd[1]: Started sshd@14-10.230.34.254:22-68.220.241.50:39002.service - OpenSSH per-connection server daemon (68.220.241.50:39002). Jan 28 02:32:35.831578 sshd[5253]: Accepted publickey for core from 68.220.241.50 port 39002 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:32:35.833835 sshd[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:32:35.841140 systemd-logind[1488]: New session 17 of user core. Jan 28 02:32:35.849381 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 02:32:36.331847 sshd[5253]: pam_unix(sshd:session): session closed for user core Jan 28 02:32:36.339199 systemd[1]: sshd@14-10.230.34.254:22-68.220.241.50:39002.service: Deactivated successfully. Jan 28 02:32:36.342172 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 02:32:36.344956 systemd-logind[1488]: Session 17 logged out. Waiting for processes to exit. Jan 28 02:32:36.347235 systemd-logind[1488]: Removed session 17. 
Jan 28 02:32:36.856199 containerd[1508]: time="2026-01-28T02:32:36.856122842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 02:32:37.173461 containerd[1508]: time="2026-01-28T02:32:37.173243075Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:37.175776 containerd[1508]: time="2026-01-28T02:32:37.175720749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 02:32:37.175914 containerd[1508]: time="2026-01-28T02:32:37.175834900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 02:32:37.176111 kubelet[2700]: E0128 02:32:37.176058 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:32:37.177006 kubelet[2700]: E0128 02:32:37.176124 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:32:37.177006 kubelet[2700]: E0128 02:32:37.176454 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66wbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vjdx_calico-system(7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:37.177255 containerd[1508]: time="2026-01-28T02:32:37.177179540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 02:32:37.504866 containerd[1508]: time="2026-01-28T02:32:37.504734016Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:37.506440 containerd[1508]: time="2026-01-28T02:32:37.506305628Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 02:32:37.506440 containerd[1508]: time="2026-01-28T02:32:37.506389318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 02:32:37.506615 kubelet[2700]: E0128 02:32:37.506526 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:32:37.506615 kubelet[2700]: E0128 02:32:37.506575 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:32:37.506920 kubelet[2700]: E0128 02:32:37.506823 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8zps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bvmzd_calico-system(345054d8-51ec-4ec2-90c7-329ebe97ba46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:37.507907 containerd[1508]: time="2026-01-28T02:32:37.507504229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 02:32:37.508760 kubelet[2700]: E0128 02:32:37.508693 2700 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bvmzd" podUID="345054d8-51ec-4ec2-90c7-329ebe97ba46" Jan 28 02:32:37.815072 containerd[1508]: time="2026-01-28T02:32:37.814922171Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:37.816781 containerd[1508]: time="2026-01-28T02:32:37.816603790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 02:32:37.816781 containerd[1508]: time="2026-01-28T02:32:37.816711896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 02:32:37.817351 kubelet[2700]: E0128 02:32:37.816979 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:32:37.817351 kubelet[2700]: E0128 02:32:37.817047 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:32:37.817351 kubelet[2700]: E0128 02:32:37.817213 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66wbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vjdx_calico-system(7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:37.818941 kubelet[2700]: E0128 02:32:37.818753 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:32:38.848475 containerd[1508]: time="2026-01-28T02:32:38.848386398Z" level=info msg="StopPodSandbox for \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\"" Jan 28 02:32:38.873379 kubelet[2700]: E0128 02:32:38.873258 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" podUID="0a6be4a3-a931-4bdf-98fa-3be5929a5064" Jan 28 02:32:39.090966 containerd[1508]: 2026-01-28 02:32:39.010 [WARNING][5273] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0", GenerateName:"calico-apiserver-7866ff566b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a6be4a3-a931-4bdf-98fa-3be5929a5064", ResourceVersion:"1383", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7866ff566b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d", Pod:"calico-apiserver-7866ff566b-wrtzl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64f27eb97e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:32:39.090966 containerd[1508]: 2026-01-28 02:32:39.010 [INFO][5273] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Jan 28 02:32:39.090966 containerd[1508]: 2026-01-28 02:32:39.010 [INFO][5273] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" iface="eth0" netns="" Jan 28 02:32:39.090966 containerd[1508]: 2026-01-28 02:32:39.010 [INFO][5273] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Jan 28 02:32:39.090966 containerd[1508]: 2026-01-28 02:32:39.010 [INFO][5273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Jan 28 02:32:39.090966 containerd[1508]: 2026-01-28 02:32:39.069 [INFO][5282] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" HandleID="k8s-pod-network.c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:32:39.090966 containerd[1508]: 2026-01-28 02:32:39.070 [INFO][5282] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:39.090966 containerd[1508]: 2026-01-28 02:32:39.070 [INFO][5282] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:32:39.090966 containerd[1508]: 2026-01-28 02:32:39.081 [WARNING][5282] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" HandleID="k8s-pod-network.c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:32:39.090966 containerd[1508]: 2026-01-28 02:32:39.081 [INFO][5282] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" HandleID="k8s-pod-network.c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:32:39.090966 containerd[1508]: 2026-01-28 02:32:39.083 [INFO][5282] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:39.090966 containerd[1508]: 2026-01-28 02:32:39.087 [INFO][5273] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Jan 28 02:32:39.094058 containerd[1508]: time="2026-01-28T02:32:39.091112983Z" level=info msg="TearDown network for sandbox \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\" successfully" Jan 28 02:32:39.094058 containerd[1508]: time="2026-01-28T02:32:39.091231693Z" level=info msg="StopPodSandbox for \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\" returns successfully" Jan 28 02:32:39.175893 containerd[1508]: time="2026-01-28T02:32:39.175595676Z" level=info msg="RemovePodSandbox for \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\"" Jan 28 02:32:39.178213 containerd[1508]: time="2026-01-28T02:32:39.178168485Z" level=info msg="Forcibly stopping sandbox \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\"" Jan 28 02:32:39.287276 containerd[1508]: 2026-01-28 02:32:39.232 [WARNING][5297] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0", GenerateName:"calico-apiserver-7866ff566b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a6be4a3-a931-4bdf-98fa-3be5929a5064", ResourceVersion:"1383", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7866ff566b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"61243bc062067397d2fa2cab2744f06d70e86244353c02bd8cab6e5192c07d6d", Pod:"calico-apiserver-7866ff566b-wrtzl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64f27eb97e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:32:39.287276 containerd[1508]: 2026-01-28 02:32:39.233 [INFO][5297] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Jan 28 02:32:39.287276 containerd[1508]: 2026-01-28 02:32:39.233 [INFO][5297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" iface="eth0" netns="" Jan 28 02:32:39.287276 containerd[1508]: 2026-01-28 02:32:39.233 [INFO][5297] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Jan 28 02:32:39.287276 containerd[1508]: 2026-01-28 02:32:39.233 [INFO][5297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Jan 28 02:32:39.287276 containerd[1508]: 2026-01-28 02:32:39.266 [INFO][5304] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" HandleID="k8s-pod-network.c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:32:39.287276 containerd[1508]: 2026-01-28 02:32:39.267 [INFO][5304] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:39.287276 containerd[1508]: 2026-01-28 02:32:39.267 [INFO][5304] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:32:39.287276 containerd[1508]: 2026-01-28 02:32:39.280 [WARNING][5304] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" HandleID="k8s-pod-network.c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:32:39.287276 containerd[1508]: 2026-01-28 02:32:39.280 [INFO][5304] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" HandleID="k8s-pod-network.c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--wrtzl-eth0" Jan 28 02:32:39.287276 containerd[1508]: 2026-01-28 02:32:39.283 [INFO][5304] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:39.287276 containerd[1508]: 2026-01-28 02:32:39.285 [INFO][5297] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193" Jan 28 02:32:39.288042 containerd[1508]: time="2026-01-28T02:32:39.287307547Z" level=info msg="TearDown network for sandbox \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\" successfully" Jan 28 02:32:39.298129 containerd[1508]: time="2026-01-28T02:32:39.298077418Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:32:39.298267 containerd[1508]: time="2026-01-28T02:32:39.298193671Z" level=info msg="RemovePodSandbox \"c674324d94f522f3db44ca41b92f5bdd5044f09080e6264e4574290bcc736193\" returns successfully" Jan 28 02:32:39.299177 containerd[1508]: time="2026-01-28T02:32:39.299044466Z" level=info msg="StopPodSandbox for \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\"" Jan 28 02:32:39.415240 containerd[1508]: 2026-01-28 02:32:39.363 [WARNING][5319] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"345054d8-51ec-4ec2-90c7-329ebe97ba46", ResourceVersion:"1372", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39", Pod:"goldmane-666569f655-bvmzd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali636de0fa910", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:32:39.415240 containerd[1508]: 2026-01-28 02:32:39.365 [INFO][5319] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:32:39.415240 containerd[1508]: 2026-01-28 02:32:39.365 [INFO][5319] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" iface="eth0" netns="" Jan 28 02:32:39.415240 containerd[1508]: 2026-01-28 02:32:39.365 [INFO][5319] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:32:39.415240 containerd[1508]: 2026-01-28 02:32:39.365 [INFO][5319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:32:39.415240 containerd[1508]: 2026-01-28 02:32:39.399 [INFO][5327] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" HandleID="k8s-pod-network.7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Workload="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:32:39.415240 containerd[1508]: 2026-01-28 02:32:39.399 [INFO][5327] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:39.415240 containerd[1508]: 2026-01-28 02:32:39.399 [INFO][5327] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:32:39.415240 containerd[1508]: 2026-01-28 02:32:39.408 [WARNING][5327] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" HandleID="k8s-pod-network.7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Workload="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:32:39.415240 containerd[1508]: 2026-01-28 02:32:39.408 [INFO][5327] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" HandleID="k8s-pod-network.7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Workload="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:32:39.415240 containerd[1508]: 2026-01-28 02:32:39.410 [INFO][5327] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:39.415240 containerd[1508]: 2026-01-28 02:32:39.412 [INFO][5319] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:32:39.415240 containerd[1508]: time="2026-01-28T02:32:39.415040091Z" level=info msg="TearDown network for sandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\" successfully" Jan 28 02:32:39.415240 containerd[1508]: time="2026-01-28T02:32:39.415080681Z" level=info msg="StopPodSandbox for \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\" returns successfully" Jan 28 02:32:39.416749 containerd[1508]: time="2026-01-28T02:32:39.416716222Z" level=info msg="RemovePodSandbox for \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\"" Jan 28 02:32:39.416856 containerd[1508]: time="2026-01-28T02:32:39.416759817Z" level=info msg="Forcibly stopping sandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\"" Jan 28 02:32:39.531966 containerd[1508]: 2026-01-28 02:32:39.477 [WARNING][5342] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"345054d8-51ec-4ec2-90c7-329ebe97ba46", ResourceVersion:"1372", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"b4a5a913af32e80d38df28d6f202373f8e58f9a3782482e535e856eba00e6f39", Pod:"goldmane-666569f655-bvmzd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.123.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali636de0fa910", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:32:39.531966 containerd[1508]: 2026-01-28 02:32:39.478 [INFO][5342] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:32:39.531966 containerd[1508]: 2026-01-28 02:32:39.478 [INFO][5342] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" iface="eth0" netns="" Jan 28 02:32:39.531966 containerd[1508]: 2026-01-28 02:32:39.478 [INFO][5342] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:32:39.531966 containerd[1508]: 2026-01-28 02:32:39.478 [INFO][5342] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:32:39.531966 containerd[1508]: 2026-01-28 02:32:39.513 [INFO][5349] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" HandleID="k8s-pod-network.7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Workload="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:32:39.531966 containerd[1508]: 2026-01-28 02:32:39.514 [INFO][5349] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:39.531966 containerd[1508]: 2026-01-28 02:32:39.514 [INFO][5349] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:32:39.531966 containerd[1508]: 2026-01-28 02:32:39.525 [WARNING][5349] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" HandleID="k8s-pod-network.7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Workload="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:32:39.531966 containerd[1508]: 2026-01-28 02:32:39.525 [INFO][5349] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" HandleID="k8s-pod-network.7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Workload="srv--hg60y.gb1.brightbox.com-k8s-goldmane--666569f655--bvmzd-eth0" Jan 28 02:32:39.531966 containerd[1508]: 2026-01-28 02:32:39.527 [INFO][5349] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:39.531966 containerd[1508]: 2026-01-28 02:32:39.530 [INFO][5342] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd" Jan 28 02:32:39.534255 containerd[1508]: time="2026-01-28T02:32:39.532962084Z" level=info msg="TearDown network for sandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\" successfully" Jan 28 02:32:39.575164 containerd[1508]: time="2026-01-28T02:32:39.574831315Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:32:39.575164 containerd[1508]: time="2026-01-28T02:32:39.574916218Z" level=info msg="RemovePodSandbox \"7a67f9e6c26d30037e19b5f26f44f72472325583a64ec79d13b8fa4f3a42b3dd\" returns successfully" Jan 28 02:32:39.575164 containerd[1508]: time="2026-01-28T02:32:39.575738503Z" level=info msg="StopPodSandbox for \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\"" Jan 28 02:32:39.675134 containerd[1508]: 2026-01-28 02:32:39.627 [WARNING][5363] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cb78a5cb-4de2-4536-925b-fdddfbef361f", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06", Pod:"coredns-668d6bf9bc-mqzws", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9bf3cacf9a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:32:39.675134 containerd[1508]: 2026-01-28 02:32:39.627 [INFO][5363] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:32:39.675134 containerd[1508]: 2026-01-28 02:32:39.627 [INFO][5363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" iface="eth0" netns="" Jan 28 02:32:39.675134 containerd[1508]: 2026-01-28 02:32:39.627 [INFO][5363] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:32:39.675134 containerd[1508]: 2026-01-28 02:32:39.627 [INFO][5363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:32:39.675134 containerd[1508]: 2026-01-28 02:32:39.659 [INFO][5370] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" HandleID="k8s-pod-network.e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:32:39.675134 containerd[1508]: 2026-01-28 02:32:39.659 [INFO][5370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:39.675134 containerd[1508]: 2026-01-28 02:32:39.659 [INFO][5370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:32:39.675134 containerd[1508]: 2026-01-28 02:32:39.668 [WARNING][5370] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" HandleID="k8s-pod-network.e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:32:39.675134 containerd[1508]: 2026-01-28 02:32:39.668 [INFO][5370] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" HandleID="k8s-pod-network.e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:32:39.675134 containerd[1508]: 2026-01-28 02:32:39.670 [INFO][5370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:39.675134 containerd[1508]: 2026-01-28 02:32:39.672 [INFO][5363] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:32:39.675134 containerd[1508]: time="2026-01-28T02:32:39.674815436Z" level=info msg="TearDown network for sandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\" successfully" Jan 28 02:32:39.675134 containerd[1508]: time="2026-01-28T02:32:39.674854100Z" level=info msg="StopPodSandbox for \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\" returns successfully" Jan 28 02:32:39.676394 containerd[1508]: time="2026-01-28T02:32:39.676357474Z" level=info msg="RemovePodSandbox for \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\"" Jan 28 02:32:39.676483 containerd[1508]: time="2026-01-28T02:32:39.676403134Z" level=info msg="Forcibly stopping sandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\"" Jan 28 02:32:39.811400 containerd[1508]: 2026-01-28 02:32:39.744 [WARNING][5384] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cb78a5cb-4de2-4536-925b-fdddfbef361f", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"62bd799643b903d369474d3308f2b4d096092c16aec0cb248eac0f335eefee06", Pod:"coredns-668d6bf9bc-mqzws", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9bf3cacf9a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:32:39.811400 containerd[1508]: 2026-01-28 02:32:39.744 [INFO][5384] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:32:39.811400 containerd[1508]: 2026-01-28 02:32:39.744 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" iface="eth0" netns="" Jan 28 02:32:39.811400 containerd[1508]: 2026-01-28 02:32:39.744 [INFO][5384] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:32:39.811400 containerd[1508]: 2026-01-28 02:32:39.744 [INFO][5384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:32:39.811400 containerd[1508]: 2026-01-28 02:32:39.786 [INFO][5392] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" HandleID="k8s-pod-network.e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:32:39.811400 containerd[1508]: 2026-01-28 02:32:39.786 [INFO][5392] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:39.811400 containerd[1508]: 2026-01-28 02:32:39.787 [INFO][5392] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:32:39.811400 containerd[1508]: 2026-01-28 02:32:39.800 [WARNING][5392] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" HandleID="k8s-pod-network.e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:32:39.811400 containerd[1508]: 2026-01-28 02:32:39.800 [INFO][5392] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" HandleID="k8s-pod-network.e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--mqzws-eth0" Jan 28 02:32:39.811400 containerd[1508]: 2026-01-28 02:32:39.803 [INFO][5392] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:39.811400 containerd[1508]: 2026-01-28 02:32:39.806 [INFO][5384] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1" Jan 28 02:32:39.811400 containerd[1508]: time="2026-01-28T02:32:39.809003327Z" level=info msg="TearDown network for sandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\" successfully" Jan 28 02:32:39.817736 containerd[1508]: time="2026-01-28T02:32:39.817531261Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:32:39.817736 containerd[1508]: time="2026-01-28T02:32:39.817717412Z" level=info msg="RemovePodSandbox \"e4d52d6afb91eced5f8d0eb1b03a6297e11e31126adf4561442878e0fe7d22c1\" returns successfully" Jan 28 02:32:39.819063 containerd[1508]: time="2026-01-28T02:32:39.819018535Z" level=info msg="StopPodSandbox for \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\"" Jan 28 02:32:39.936656 containerd[1508]: 2026-01-28 02:32:39.879 [WARNING][5406] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bc7a2646-8a27-4b05-8c51-22c9804a41de", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e", Pod:"coredns-668d6bf9bc-b4mnx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali984c4478d11", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:32:39.936656 containerd[1508]: 2026-01-28 02:32:39.880 [INFO][5406] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:32:39.936656 containerd[1508]: 2026-01-28 02:32:39.880 [INFO][5406] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" iface="eth0" netns="" Jan 28 02:32:39.936656 containerd[1508]: 2026-01-28 02:32:39.880 [INFO][5406] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:32:39.936656 containerd[1508]: 2026-01-28 02:32:39.880 [INFO][5406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:32:39.936656 containerd[1508]: 2026-01-28 02:32:39.914 [INFO][5413] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" HandleID="k8s-pod-network.066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:32:39.936656 containerd[1508]: 2026-01-28 02:32:39.914 [INFO][5413] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:39.936656 containerd[1508]: 2026-01-28 02:32:39.914 [INFO][5413] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:32:39.936656 containerd[1508]: 2026-01-28 02:32:39.929 [WARNING][5413] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" HandleID="k8s-pod-network.066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:32:39.936656 containerd[1508]: 2026-01-28 02:32:39.929 [INFO][5413] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" HandleID="k8s-pod-network.066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:32:39.936656 containerd[1508]: 2026-01-28 02:32:39.932 [INFO][5413] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:39.936656 containerd[1508]: 2026-01-28 02:32:39.934 [INFO][5406] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:32:39.938041 containerd[1508]: time="2026-01-28T02:32:39.936759995Z" level=info msg="TearDown network for sandbox \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\" successfully" Jan 28 02:32:39.938041 containerd[1508]: time="2026-01-28T02:32:39.936823141Z" level=info msg="StopPodSandbox for \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\" returns successfully" Jan 28 02:32:39.938259 containerd[1508]: time="2026-01-28T02:32:39.938056132Z" level=info msg="RemovePodSandbox for \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\"" Jan 28 02:32:39.938259 containerd[1508]: time="2026-01-28T02:32:39.938100202Z" level=info msg="Forcibly stopping sandbox \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\"" Jan 28 02:32:39.988205 update_engine[1489]: I20260128 02:32:39.986439 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 02:32:39.989667 update_engine[1489]: I20260128 02:32:39.989358 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 02:32:39.990038 update_engine[1489]: I20260128 02:32:39.989993 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 02:32:39.990633 update_engine[1489]: E20260128 02:32:39.990561 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 02:32:39.990969 update_engine[1489]: I20260128 02:32:39.990911 1489 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 28 02:32:40.100198 containerd[1508]: 2026-01-28 02:32:40.040 [WARNING][5427] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bc7a2646-8a27-4b05-8c51-22c9804a41de", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"91bab994dc3d1a173f308923fe919d6ced36c18b72561ed07d371cb689c81d0e", Pod:"coredns-668d6bf9bc-b4mnx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.123.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali984c4478d11", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:32:40.100198 containerd[1508]: 2026-01-28 02:32:40.040 [INFO][5427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:32:40.100198 containerd[1508]: 2026-01-28 02:32:40.040 [INFO][5427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" iface="eth0" netns="" Jan 28 02:32:40.100198 containerd[1508]: 2026-01-28 02:32:40.040 [INFO][5427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:32:40.100198 containerd[1508]: 2026-01-28 02:32:40.040 [INFO][5427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:32:40.100198 containerd[1508]: 2026-01-28 02:32:40.080 [INFO][5434] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" HandleID="k8s-pod-network.066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:32:40.100198 containerd[1508]: 2026-01-28 02:32:40.081 [INFO][5434] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:40.100198 containerd[1508]: 2026-01-28 02:32:40.081 [INFO][5434] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:32:40.100198 containerd[1508]: 2026-01-28 02:32:40.092 [WARNING][5434] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" HandleID="k8s-pod-network.066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:32:40.100198 containerd[1508]: 2026-01-28 02:32:40.093 [INFO][5434] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" HandleID="k8s-pod-network.066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Workload="srv--hg60y.gb1.brightbox.com-k8s-coredns--668d6bf9bc--b4mnx-eth0" Jan 28 02:32:40.100198 containerd[1508]: 2026-01-28 02:32:40.095 [INFO][5434] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:40.100198 containerd[1508]: 2026-01-28 02:32:40.097 [INFO][5427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa" Jan 28 02:32:40.101973 containerd[1508]: time="2026-01-28T02:32:40.100971339Z" level=info msg="TearDown network for sandbox \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\" successfully" Jan 28 02:32:40.107855 containerd[1508]: time="2026-01-28T02:32:40.107225717Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:32:40.107855 containerd[1508]: time="2026-01-28T02:32:40.107374114Z" level=info msg="RemovePodSandbox \"066009045114788b573bfc22a20c52c9dc465aad57f07bdab35868d34c6b6cfa\" returns successfully" Jan 28 02:32:40.108388 containerd[1508]: time="2026-01-28T02:32:40.108056285Z" level=info msg="StopPodSandbox for \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\"" Jan 28 02:32:40.209992 containerd[1508]: 2026-01-28 02:32:40.159 [WARNING][5448] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0", GenerateName:"calico-apiserver-7866ff566b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd42b56d-5021-410e-8408-e15b3c52f065", ResourceVersion:"1307", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7866ff566b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26", Pod:"calico-apiserver-7866ff566b-tbgpj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif555d5a8604", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:32:40.209992 containerd[1508]: 2026-01-28 02:32:40.159 [INFO][5448] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:32:40.209992 containerd[1508]: 2026-01-28 02:32:40.159 [INFO][5448] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" iface="eth0" netns="" Jan 28 02:32:40.209992 containerd[1508]: 2026-01-28 02:32:40.160 [INFO][5448] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:32:40.209992 containerd[1508]: 2026-01-28 02:32:40.160 [INFO][5448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:32:40.209992 containerd[1508]: 2026-01-28 02:32:40.191 [INFO][5456] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" HandleID="k8s-pod-network.eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:32:40.209992 containerd[1508]: 2026-01-28 02:32:40.192 [INFO][5456] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:40.209992 containerd[1508]: 2026-01-28 02:32:40.192 [INFO][5456] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:32:40.209992 containerd[1508]: 2026-01-28 02:32:40.203 [WARNING][5456] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" HandleID="k8s-pod-network.eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:32:40.209992 containerd[1508]: 2026-01-28 02:32:40.203 [INFO][5456] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" HandleID="k8s-pod-network.eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:32:40.209992 containerd[1508]: 2026-01-28 02:32:40.205 [INFO][5456] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:40.209992 containerd[1508]: 2026-01-28 02:32:40.207 [INFO][5448] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:32:40.209992 containerd[1508]: time="2026-01-28T02:32:40.209777449Z" level=info msg="TearDown network for sandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\" successfully" Jan 28 02:32:40.209992 containerd[1508]: time="2026-01-28T02:32:40.209837802Z" level=info msg="StopPodSandbox for \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\" returns successfully" Jan 28 02:32:40.211127 containerd[1508]: time="2026-01-28T02:32:40.210693117Z" level=info msg="RemovePodSandbox for \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\"" Jan 28 02:32:40.211127 containerd[1508]: time="2026-01-28T02:32:40.210756198Z" level=info msg="Forcibly stopping sandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\"" Jan 28 02:32:40.321393 containerd[1508]: 2026-01-28 02:32:40.267 [WARNING][5470] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0", GenerateName:"calico-apiserver-7866ff566b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd42b56d-5021-410e-8408-e15b3c52f065", ResourceVersion:"1307", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7866ff566b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"1a3a4f6b435343c3213ead560f25e5913a50fd2cf64db1b689e1944e92089f26", Pod:"calico-apiserver-7866ff566b-tbgpj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.123.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif555d5a8604", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:32:40.321393 containerd[1508]: 2026-01-28 02:32:40.267 [INFO][5470] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:32:40.321393 containerd[1508]: 2026-01-28 02:32:40.267 [INFO][5470] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" iface="eth0" netns="" Jan 28 02:32:40.321393 containerd[1508]: 2026-01-28 02:32:40.267 [INFO][5470] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:32:40.321393 containerd[1508]: 2026-01-28 02:32:40.267 [INFO][5470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:32:40.321393 containerd[1508]: 2026-01-28 02:32:40.303 [INFO][5477] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" HandleID="k8s-pod-network.eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:32:40.321393 containerd[1508]: 2026-01-28 02:32:40.304 [INFO][5477] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:40.321393 containerd[1508]: 2026-01-28 02:32:40.304 [INFO][5477] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:32:40.321393 containerd[1508]: 2026-01-28 02:32:40.314 [WARNING][5477] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" HandleID="k8s-pod-network.eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:32:40.321393 containerd[1508]: 2026-01-28 02:32:40.314 [INFO][5477] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" HandleID="k8s-pod-network.eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--apiserver--7866ff566b--tbgpj-eth0" Jan 28 02:32:40.321393 containerd[1508]: 2026-01-28 02:32:40.316 [INFO][5477] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:40.321393 containerd[1508]: 2026-01-28 02:32:40.319 [INFO][5470] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2" Jan 28 02:32:40.324232 containerd[1508]: time="2026-01-28T02:32:40.322511669Z" level=info msg="TearDown network for sandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\" successfully" Jan 28 02:32:40.339551 containerd[1508]: time="2026-01-28T02:32:40.339298381Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:32:40.339551 containerd[1508]: time="2026-01-28T02:32:40.339419596Z" level=info msg="RemovePodSandbox \"eb2ff0c829a2f248275fe64079888bc5cb638a39e83c8275c7f4e7a85ac896a2\" returns successfully" Jan 28 02:32:40.341795 containerd[1508]: time="2026-01-28T02:32:40.341025336Z" level=info msg="StopPodSandbox for \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\"" Jan 28 02:32:40.467353 containerd[1508]: 2026-01-28 02:32:40.411 [WARNING][5491] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5", ResourceVersion:"1370", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc", Pod:"csi-node-driver-9vjdx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidbd8c3a1dc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:32:40.467353 containerd[1508]: 2026-01-28 02:32:40.412 [INFO][5491] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:32:40.467353 containerd[1508]: 2026-01-28 02:32:40.412 [INFO][5491] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" iface="eth0" netns="" Jan 28 02:32:40.467353 containerd[1508]: 2026-01-28 02:32:40.412 [INFO][5491] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:32:40.467353 containerd[1508]: 2026-01-28 02:32:40.412 [INFO][5491] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:32:40.467353 containerd[1508]: 2026-01-28 02:32:40.447 [INFO][5498] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" HandleID="k8s-pod-network.e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Workload="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:32:40.467353 containerd[1508]: 2026-01-28 02:32:40.447 [INFO][5498] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:40.467353 containerd[1508]: 2026-01-28 02:32:40.447 [INFO][5498] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:32:40.467353 containerd[1508]: 2026-01-28 02:32:40.459 [WARNING][5498] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" HandleID="k8s-pod-network.e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Workload="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:32:40.467353 containerd[1508]: 2026-01-28 02:32:40.459 [INFO][5498] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" HandleID="k8s-pod-network.e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Workload="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:32:40.467353 containerd[1508]: 2026-01-28 02:32:40.462 [INFO][5498] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:40.467353 containerd[1508]: 2026-01-28 02:32:40.464 [INFO][5491] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:32:40.468583 containerd[1508]: time="2026-01-28T02:32:40.468414898Z" level=info msg="TearDown network for sandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\" successfully" Jan 28 02:32:40.468583 containerd[1508]: time="2026-01-28T02:32:40.468522373Z" level=info msg="StopPodSandbox for \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\" returns successfully" Jan 28 02:32:40.471013 containerd[1508]: time="2026-01-28T02:32:40.470805261Z" level=info msg="RemovePodSandbox for \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\"" Jan 28 02:32:40.471769 containerd[1508]: time="2026-01-28T02:32:40.471340489Z" level=info msg="Forcibly stopping sandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\"" Jan 28 02:32:40.619602 containerd[1508]: 2026-01-28 02:32:40.545 [WARNING][5512] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5", ResourceVersion:"1370", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"dc9952c3493eddaa6f25bd44ec453c55323fd6e4d49d92dbd80681ae05002acc", Pod:"csi-node-driver-9vjdx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.123.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidbd8c3a1dc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:32:40.619602 containerd[1508]: 2026-01-28 02:32:40.546 [INFO][5512] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:32:40.619602 containerd[1508]: 2026-01-28 02:32:40.546 [INFO][5512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" iface="eth0" netns="" Jan 28 02:32:40.619602 containerd[1508]: 2026-01-28 02:32:40.546 [INFO][5512] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:32:40.619602 containerd[1508]: 2026-01-28 02:32:40.546 [INFO][5512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:32:40.619602 containerd[1508]: 2026-01-28 02:32:40.597 [INFO][5519] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" HandleID="k8s-pod-network.e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Workload="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:32:40.619602 containerd[1508]: 2026-01-28 02:32:40.597 [INFO][5519] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:40.619602 containerd[1508]: 2026-01-28 02:32:40.598 [INFO][5519] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:32:40.619602 containerd[1508]: 2026-01-28 02:32:40.609 [WARNING][5519] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" HandleID="k8s-pod-network.e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Workload="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:32:40.619602 containerd[1508]: 2026-01-28 02:32:40.610 [INFO][5519] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" HandleID="k8s-pod-network.e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Workload="srv--hg60y.gb1.brightbox.com-k8s-csi--node--driver--9vjdx-eth0" Jan 28 02:32:40.619602 containerd[1508]: 2026-01-28 02:32:40.613 [INFO][5519] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:40.619602 containerd[1508]: 2026-01-28 02:32:40.615 [INFO][5512] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98" Jan 28 02:32:40.620636 containerd[1508]: time="2026-01-28T02:32:40.620522726Z" level=info msg="TearDown network for sandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\" successfully" Jan 28 02:32:40.627420 containerd[1508]: time="2026-01-28T02:32:40.627275474Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:32:40.627420 containerd[1508]: time="2026-01-28T02:32:40.627365754Z" level=info msg="RemovePodSandbox \"e241d5c916adcaecc775bbfb39837ace9c7de952da920f5a165f195aadd60e98\" returns successfully" Jan 28 02:32:40.629861 containerd[1508]: time="2026-01-28T02:32:40.629774210Z" level=info msg="StopPodSandbox for \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\"" Jan 28 02:32:40.804402 containerd[1508]: 2026-01-28 02:32:40.740 [WARNING][5533] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0", GenerateName:"calico-kube-controllers-858bccccf6-", Namespace:"calico-system", SelfLink:"", UID:"19aa6a03-3b76-49c3-840d-da43872b111b", ResourceVersion:"1348", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"858bccccf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd", Pod:"calico-kube-controllers-858bccccf6-bqm86", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic458fc7f0b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:32:40.804402 containerd[1508]: 2026-01-28 02:32:40.740 [INFO][5533] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:32:40.804402 containerd[1508]: 2026-01-28 02:32:40.740 [INFO][5533] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" iface="eth0" netns="" Jan 28 02:32:40.804402 containerd[1508]: 2026-01-28 02:32:40.740 [INFO][5533] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:32:40.804402 containerd[1508]: 2026-01-28 02:32:40.740 [INFO][5533] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:32:40.804402 containerd[1508]: 2026-01-28 02:32:40.784 [INFO][5540] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" HandleID="k8s-pod-network.dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:32:40.804402 containerd[1508]: 2026-01-28 02:32:40.785 [INFO][5540] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:40.804402 containerd[1508]: 2026-01-28 02:32:40.785 [INFO][5540] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:32:40.804402 containerd[1508]: 2026-01-28 02:32:40.795 [WARNING][5540] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" HandleID="k8s-pod-network.dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:32:40.804402 containerd[1508]: 2026-01-28 02:32:40.796 [INFO][5540] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" HandleID="k8s-pod-network.dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:32:40.804402 containerd[1508]: 2026-01-28 02:32:40.799 [INFO][5540] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:40.804402 containerd[1508]: 2026-01-28 02:32:40.801 [INFO][5533] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:32:40.805940 containerd[1508]: time="2026-01-28T02:32:40.805878246Z" level=info msg="TearDown network for sandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\" successfully" Jan 28 02:32:40.806072 containerd[1508]: time="2026-01-28T02:32:40.805936258Z" level=info msg="StopPodSandbox for \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\" returns successfully" Jan 28 02:32:40.808028 containerd[1508]: time="2026-01-28T02:32:40.807985919Z" level=info msg="RemovePodSandbox for \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\"" Jan 28 02:32:40.808108 containerd[1508]: time="2026-01-28T02:32:40.808036588Z" level=info msg="Forcibly stopping sandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\"" Jan 28 02:32:40.869445 kubelet[2700]: E0128 02:32:40.868012 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd7f4d764-szmpb" podUID="9d9dfd5e-a429-4d13-9ec6-6f8b582ac456" Jan 28 02:32:40.984215 containerd[1508]: 2026-01-28 02:32:40.912 [WARNING][5555] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0", GenerateName:"calico-kube-controllers-858bccccf6-", Namespace:"calico-system", SelfLink:"", UID:"19aa6a03-3b76-49c3-840d-da43872b111b", ResourceVersion:"1348", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 31, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"858bccccf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hg60y.gb1.brightbox.com", ContainerID:"c5410e1d4067f7634e26a3aa0137d63655585159c3c5d87c7cce200db9ffa3dd", Pod:"calico-kube-controllers-858bccccf6-bqm86", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.123.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic458fc7f0b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:32:40.984215 containerd[1508]: 2026-01-28 02:32:40.913 [INFO][5555] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:32:40.984215 containerd[1508]: 2026-01-28 02:32:40.913 [INFO][5555] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" iface="eth0" netns="" Jan 28 02:32:40.984215 containerd[1508]: 2026-01-28 02:32:40.913 [INFO][5555] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:32:40.984215 containerd[1508]: 2026-01-28 02:32:40.913 [INFO][5555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:32:40.984215 containerd[1508]: 2026-01-28 02:32:40.959 [INFO][5563] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" HandleID="k8s-pod-network.dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:32:40.984215 containerd[1508]: 2026-01-28 02:32:40.961 [INFO][5563] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:40.984215 containerd[1508]: 2026-01-28 02:32:40.961 [INFO][5563] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:32:40.984215 containerd[1508]: 2026-01-28 02:32:40.975 [WARNING][5563] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" HandleID="k8s-pod-network.dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:32:40.984215 containerd[1508]: 2026-01-28 02:32:40.975 [INFO][5563] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" HandleID="k8s-pod-network.dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Workload="srv--hg60y.gb1.brightbox.com-k8s-calico--kube--controllers--858bccccf6--bqm86-eth0" Jan 28 02:32:40.984215 containerd[1508]: 2026-01-28 02:32:40.978 [INFO][5563] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:40.984215 containerd[1508]: 2026-01-28 02:32:40.981 [INFO][5555] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d" Jan 28 02:32:40.986359 containerd[1508]: time="2026-01-28T02:32:40.984366660Z" level=info msg="TearDown network for sandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\" successfully" Jan 28 02:32:40.990348 containerd[1508]: time="2026-01-28T02:32:40.990268113Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:32:40.990457 containerd[1508]: time="2026-01-28T02:32:40.990388345Z" level=info msg="RemovePodSandbox \"dcf67e7f832a166a9398e2f54907b132477d283d651d4b31ccaffbffd24ad60d\" returns successfully" Jan 28 02:32:40.991620 containerd[1508]: time="2026-01-28T02:32:40.991586870Z" level=info msg="StopPodSandbox for \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\"" Jan 28 02:32:41.141392 containerd[1508]: 2026-01-28 02:32:41.079 [WARNING][5578] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-whisker--7f9cc9d84b--4zj2q-eth0" Jan 28 02:32:41.141392 containerd[1508]: 2026-01-28 02:32:41.079 [INFO][5578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:32:41.141392 containerd[1508]: 2026-01-28 02:32:41.079 [INFO][5578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" iface="eth0" netns="" Jan 28 02:32:41.141392 containerd[1508]: 2026-01-28 02:32:41.079 [INFO][5578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:32:41.141392 containerd[1508]: 2026-01-28 02:32:41.079 [INFO][5578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:32:41.141392 containerd[1508]: 2026-01-28 02:32:41.121 [INFO][5585] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" HandleID="k8s-pod-network.b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Workload="srv--hg60y.gb1.brightbox.com-k8s-whisker--7f9cc9d84b--4zj2q-eth0" Jan 28 02:32:41.141392 containerd[1508]: 2026-01-28 02:32:41.121 [INFO][5585] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:41.141392 containerd[1508]: 2026-01-28 02:32:41.121 [INFO][5585] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:32:41.141392 containerd[1508]: 2026-01-28 02:32:41.133 [WARNING][5585] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" HandleID="k8s-pod-network.b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Workload="srv--hg60y.gb1.brightbox.com-k8s-whisker--7f9cc9d84b--4zj2q-eth0" Jan 28 02:32:41.141392 containerd[1508]: 2026-01-28 02:32:41.133 [INFO][5585] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" HandleID="k8s-pod-network.b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Workload="srv--hg60y.gb1.brightbox.com-k8s-whisker--7f9cc9d84b--4zj2q-eth0" Jan 28 02:32:41.141392 containerd[1508]: 2026-01-28 02:32:41.136 [INFO][5585] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:41.141392 containerd[1508]: 2026-01-28 02:32:41.138 [INFO][5578] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:32:41.142221 containerd[1508]: time="2026-01-28T02:32:41.141425109Z" level=info msg="TearDown network for sandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\" successfully" Jan 28 02:32:41.142221 containerd[1508]: time="2026-01-28T02:32:41.141592938Z" level=info msg="StopPodSandbox for \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\" returns successfully" Jan 28 02:32:41.143498 containerd[1508]: time="2026-01-28T02:32:41.142876309Z" level=info msg="RemovePodSandbox for \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\"" Jan 28 02:32:41.143498 containerd[1508]: time="2026-01-28T02:32:41.142950688Z" level=info msg="Forcibly stopping sandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\"" Jan 28 02:32:41.289937 containerd[1508]: 2026-01-28 02:32:41.231 [WARNING][5599] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" WorkloadEndpoint="srv--hg60y.gb1.brightbox.com-k8s-whisker--7f9cc9d84b--4zj2q-eth0" Jan 28 02:32:41.289937 containerd[1508]: 2026-01-28 02:32:41.231 [INFO][5599] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:32:41.289937 containerd[1508]: 2026-01-28 02:32:41.231 [INFO][5599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" iface="eth0" netns="" Jan 28 02:32:41.289937 containerd[1508]: 2026-01-28 02:32:41.231 [INFO][5599] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:32:41.289937 containerd[1508]: 2026-01-28 02:32:41.231 [INFO][5599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:32:41.289937 containerd[1508]: 2026-01-28 02:32:41.271 [INFO][5606] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" HandleID="k8s-pod-network.b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Workload="srv--hg60y.gb1.brightbox.com-k8s-whisker--7f9cc9d84b--4zj2q-eth0" Jan 28 02:32:41.289937 containerd[1508]: 2026-01-28 02:32:41.271 [INFO][5606] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:32:41.289937 containerd[1508]: 2026-01-28 02:32:41.271 [INFO][5606] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:32:41.289937 containerd[1508]: 2026-01-28 02:32:41.282 [WARNING][5606] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" HandleID="k8s-pod-network.b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Workload="srv--hg60y.gb1.brightbox.com-k8s-whisker--7f9cc9d84b--4zj2q-eth0" Jan 28 02:32:41.289937 containerd[1508]: 2026-01-28 02:32:41.282 [INFO][5606] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" HandleID="k8s-pod-network.b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Workload="srv--hg60y.gb1.brightbox.com-k8s-whisker--7f9cc9d84b--4zj2q-eth0" Jan 28 02:32:41.289937 containerd[1508]: 2026-01-28 02:32:41.284 [INFO][5606] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:32:41.289937 containerd[1508]: 2026-01-28 02:32:41.286 [INFO][5599] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296" Jan 28 02:32:41.292393 containerd[1508]: time="2026-01-28T02:32:41.290833271Z" level=info msg="TearDown network for sandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\" successfully" Jan 28 02:32:41.296182 containerd[1508]: time="2026-01-28T02:32:41.295877351Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:32:41.296182 containerd[1508]: time="2026-01-28T02:32:41.295949241Z" level=info msg="RemovePodSandbox \"b73cbdc9662a7563b799b8f7540f77d696e110159f3e07b85d8bcc42c7a2d296\" returns successfully" Jan 28 02:32:41.443959 systemd[1]: Started sshd@15-10.230.34.254:22-68.220.241.50:39006.service - OpenSSH per-connection server daemon (68.220.241.50:39006). Jan 28 02:32:42.054203 sshd[5614]: Accepted publickey for core from 68.220.241.50 port 39006 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:32:42.055581 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:32:42.065219 systemd-logind[1488]: New session 18 of user core. Jan 28 02:32:42.074430 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 02:32:42.587781 sshd[5614]: pam_unix(sshd:session): session closed for user core Jan 28 02:32:42.594892 systemd-logind[1488]: Session 18 logged out. Waiting for processes to exit. Jan 28 02:32:42.596142 systemd[1]: sshd@15-10.230.34.254:22-68.220.241.50:39006.service: Deactivated successfully. Jan 28 02:32:42.599688 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 02:32:42.601843 systemd-logind[1488]: Removed session 18. 
Jan 28 02:32:42.860072 containerd[1508]: time="2026-01-28T02:32:42.859378979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:32:43.170569 containerd[1508]: time="2026-01-28T02:32:43.170085490Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:32:43.172501 containerd[1508]: time="2026-01-28T02:32:43.172295380Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:32:43.172584 containerd[1508]: time="2026-01-28T02:32:43.172492096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 02:32:43.172926 kubelet[2700]: E0128 02:32:43.172799 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:32:43.172926 kubelet[2700]: E0128 02:32:43.172901 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:32:43.173877 kubelet[2700]: E0128 02:32:43.173743 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzjjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7866ff566b-tbgpj_calico-apiserver(cd42b56d-5021-410e-8408-e15b3c52f065): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:32:43.176169 kubelet[2700]: E0128 02:32:43.175023 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" podUID="cd42b56d-5021-410e-8408-e15b3c52f065" Jan 28 02:32:47.700652 systemd[1]: Started sshd@16-10.230.34.254:22-68.220.241.50:44350.service - OpenSSH per-connection server daemon (68.220.241.50:44350). Jan 28 02:32:48.299690 sshd[5653]: Accepted publickey for core from 68.220.241.50 port 44350 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:32:48.302324 sshd[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:32:48.309218 systemd-logind[1488]: New session 19 of user core. Jan 28 02:32:48.319363 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 02:32:48.826363 sshd[5653]: pam_unix(sshd:session): session closed for user core Jan 28 02:32:48.833677 systemd[1]: sshd@16-10.230.34.254:22-68.220.241.50:44350.service: Deactivated successfully. Jan 28 02:32:48.837244 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 02:32:48.838387 systemd-logind[1488]: Session 19 logged out. Waiting for processes to exit. Jan 28 02:32:48.841317 systemd-logind[1488]: Removed session 19. 
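Every image failure in this section has the same shape: the registry answers 404 for the flatcar/calico v3.30.4 tags, containerd logs "trying next host - response was http.StatusNotFound", and kubelet converts that into ErrImagePull and then ImagePullBackOff for whisker, apiserver, csi, kube-controllers and goldmane alike. Assuming crictl on the node is pointed at the same containerd socket (not shown in this log), the resolution failure can be reproduced outside kubelet with:

    crictl pull ghcr.io/flatcar/calico/apiserver:v3.30.4

A NotFound here, rather than an authorization error, points at a missing or renamed tag upstream rather than at credentials or network policy.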
Jan 28 02:32:48.859282 kubelet[2700]: E0128 02:32:48.858995 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:32:49.855327 kubelet[2700]: E0128 02:32:49.855261 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" podUID="19aa6a03-3b76-49c3-840d-da43872b111b" Jan 28 02:32:49.985232 update_engine[1489]: I20260128 02:32:49.984555 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 02:32:49.986759 update_engine[1489]: I20260128 02:32:49.986263 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 02:32:49.986759 update_engine[1489]: I20260128 02:32:49.986691 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 02:32:49.993433 update_engine[1489]: E20260128 02:32:49.993279 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 02:32:49.993433 update_engine[1489]: I20260128 02:32:49.993391 1489 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 28 02:32:50.856041 kubelet[2700]: E0128 02:32:50.855892 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bvmzd" podUID="345054d8-51ec-4ec2-90c7-329ebe97ba46" Jan 28 02:32:53.855186 kubelet[2700]: E0128 02:32:53.855097 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" podUID="0a6be4a3-a931-4bdf-98fa-3be5929a5064" Jan 28 02:32:53.934595 systemd[1]: Started sshd@17-10.230.34.254:22-68.220.241.50:37116.service - OpenSSH per-connection server daemon (68.220.241.50:37116). Jan 28 02:32:54.518109 sshd[5668]: Accepted publickey for core from 68.220.241.50 port 37116 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:32:54.520397 sshd[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:32:54.527507 systemd-logind[1488]: New session 20 of user core. Jan 28 02:32:54.534397 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 02:32:55.036136 sshd[5668]: pam_unix(sshd:session): session closed for user core Jan 28 02:32:55.044690 systemd-logind[1488]: Session 20 logged out. Waiting for processes to exit. Jan 28 02:32:55.045598 systemd[1]: sshd@17-10.230.34.254:22-68.220.241.50:37116.service: Deactivated successfully. Jan 28 02:32:55.050114 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 02:32:55.053321 systemd-logind[1488]: Removed session 20. 
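The update_engine noise above is expected rather than pathological: the fetcher is trying to resolve a host literally named "disabled", which is the conventional Flatcar way of switching off Omaha update checks, so every transfer fails DNS by design and the engine simply retries and reschedules (the fuller burst below ends with "Next update check in 42m52s"). A minimal sketch of the setting that produces this behaviour, assuming the stock update.conf mechanism:

    # /etc/flatcar/update.conf (sketch)
    # SERVER=disabled turns off Omaha update checks; update_engine keeps
    # running and logs one failed fetch cycle per check interval.
    SERVER=disabled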
Jan 28 02:32:55.859411 kubelet[2700]: E0128 02:32:55.858544 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd7f4d764-szmpb" podUID="9d9dfd5e-a429-4d13-9ec6-6f8b582ac456" Jan 28 02:32:56.862534 kubelet[2700]: E0128 02:32:56.861253 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" podUID="cd42b56d-5021-410e-8408-e15b3c52f065" Jan 28 02:32:59.989363 update_engine[1489]: I20260128 02:32:59.988029 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 02:32:59.989363 update_engine[1489]: I20260128 02:32:59.988880 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 02:32:59.990353 update_engine[1489]: I20260128 02:32:59.990262 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 02:32:59.990796 update_engine[1489]: E20260128 02:32:59.990747 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 02:32:59.990999 update_engine[1489]: I20260128 02:32:59.990967 1489 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 02:32:59.994674 update_engine[1489]: I20260128 02:32:59.994024 1489 omaha_request_action.cc:617] Omaha request response: Jan 28 02:32:59.994674 update_engine[1489]: E20260128 02:32:59.994277 1489 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 28 02:32:59.994674 update_engine[1489]: I20260128 02:32:59.994478 1489 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 28 02:32:59.994674 update_engine[1489]: I20260128 02:32:59.994500 1489 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 02:32:59.994674 update_engine[1489]: I20260128 02:32:59.994513 1489 update_attempter.cc:306] Processing Done. Jan 28 02:32:59.994674 update_engine[1489]: E20260128 02:32:59.994567 1489 update_attempter.cc:619] Update failed. 
Jan 28 02:32:59.994674 update_engine[1489]: I20260128 02:32:59.994590 1489 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 28 02:32:59.994674 update_engine[1489]: I20260128 02:32:59.994606 1489 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 28 02:32:59.994674 update_engine[1489]: I20260128 02:32:59.994618 1489 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 28 02:32:59.995256 update_engine[1489]: I20260128 02:32:59.994776 1489 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 02:32:59.995256 update_engine[1489]: I20260128 02:32:59.994844 1489 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 02:32:59.995256 update_engine[1489]: I20260128 02:32:59.994861 1489 omaha_request_action.cc:272] Request: Jan 28 02:32:59.995256 update_engine[1489]: Jan 28 02:32:59.995256 update_engine[1489]: Jan 28 02:32:59.995256 update_engine[1489]: Jan 28 02:32:59.995256 update_engine[1489]: Jan 28 02:32:59.995256 update_engine[1489]: Jan 28 02:32:59.995256 update_engine[1489]: Jan 28 02:32:59.995256 update_engine[1489]: I20260128 02:32:59.994874 1489 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 02:32:59.995256 update_engine[1489]: I20260128 02:32:59.995203 1489 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 02:32:59.995787 update_engine[1489]: I20260128 02:32:59.995478 1489 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 02:32:59.996345 update_engine[1489]: E20260128 02:32:59.996294 1489 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 02:32:59.996441 update_engine[1489]: I20260128 02:32:59.996371 1489 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 02:32:59.996441 update_engine[1489]: I20260128 02:32:59.996404 1489 omaha_request_action.cc:617] Omaha request response: Jan 28 02:32:59.996441 update_engine[1489]: I20260128 02:32:59.996420 1489 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 02:32:59.996441 update_engine[1489]: I20260128 02:32:59.996433 1489 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 02:32:59.996633 update_engine[1489]: I20260128 02:32:59.996444 1489 update_attempter.cc:306] Processing Done. Jan 28 02:32:59.996633 update_engine[1489]: I20260128 02:32:59.996456 1489 update_attempter.cc:310] Error event sent. Jan 28 02:32:59.997336 locksmithd[1529]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 28 02:32:59.998502 update_engine[1489]: I20260128 02:32:59.998433 1489 update_check_scheduler.cc:74] Next update check in 42m52s Jan 28 02:32:59.999025 locksmithd[1529]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 28 02:33:00.157663 systemd[1]: Started sshd@18-10.230.34.254:22-68.220.241.50:37120.service - OpenSSH per-connection server daemon (68.220.241.50:37120). Jan 28 02:33:00.776650 sshd[5683]: Accepted publickey for core from 68.220.241.50 port 37120 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:33:00.779056 sshd[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:33:00.787115 systemd-logind[1488]: New session 21 of user core. 
Jan 28 02:33:00.794393 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 02:33:01.310724 sshd[5683]: pam_unix(sshd:session): session closed for user core Jan 28 02:33:01.316861 systemd[1]: sshd@18-10.230.34.254:22-68.220.241.50:37120.service: Deactivated successfully. Jan 28 02:33:01.319931 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 02:33:01.320977 systemd-logind[1488]: Session 21 logged out. Waiting for processes to exit. Jan 28 02:33:01.323618 systemd-logind[1488]: Removed session 21. Jan 28 02:33:01.415660 systemd[1]: Started sshd@19-10.230.34.254:22-68.220.241.50:37124.service - OpenSSH per-connection server daemon (68.220.241.50:37124). Jan 28 02:33:01.855425 kubelet[2700]: E0128 02:33:01.855304 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" podUID="19aa6a03-3b76-49c3-840d-da43872b111b" Jan 28 02:33:02.017829 sshd[5696]: Accepted publickey for core from 68.220.241.50 port 37124 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:33:02.020454 sshd[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:33:02.028244 systemd-logind[1488]: New session 22 of user core. Jan 28 02:33:02.033396 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 02:33:02.856885 kubelet[2700]: E0128 02:33:02.856730 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:33:02.940957 sshd[5696]: pam_unix(sshd:session): session closed for user core Jan 28 02:33:02.950354 systemd[1]: sshd@19-10.230.34.254:22-68.220.241.50:37124.service: Deactivated successfully. Jan 28 02:33:02.953670 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 02:33:02.956314 systemd-logind[1488]: Session 22 logged out. Waiting for processes to exit. Jan 28 02:33:02.958616 systemd-logind[1488]: Removed session 22. Jan 28 02:33:03.041648 systemd[1]: Started sshd@20-10.230.34.254:22-68.220.241.50:53340.service - OpenSSH per-connection server daemon (68.220.241.50:53340). 
Jan 28 02:33:03.682428 sshd[5707]: Accepted publickey for core from 68.220.241.50 port 53340 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:33:03.684775 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:33:03.692375 systemd-logind[1488]: New session 23 of user core. Jan 28 02:33:03.698004 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 02:33:03.855795 kubelet[2700]: E0128 02:33:03.855598 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bvmzd" podUID="345054d8-51ec-4ec2-90c7-329ebe97ba46" Jan 28 02:33:04.982367 sshd[5707]: pam_unix(sshd:session): session closed for user core Jan 28 02:33:04.989600 systemd[1]: sshd@20-10.230.34.254:22-68.220.241.50:53340.service: Deactivated successfully. Jan 28 02:33:04.993419 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 02:33:04.996473 systemd-logind[1488]: Session 23 logged out. Waiting for processes to exit. Jan 28 02:33:04.998501 systemd-logind[1488]: Removed session 23. Jan 28 02:33:05.092936 systemd[1]: Started sshd@21-10.230.34.254:22-68.220.241.50:53352.service - OpenSSH per-connection server daemon (68.220.241.50:53352). Jan 28 02:33:05.689094 sshd[5725]: Accepted publickey for core from 68.220.241.50 port 53352 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:33:05.692376 sshd[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:33:05.701234 systemd-logind[1488]: New session 24 of user core. Jan 28 02:33:05.707584 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 02:33:05.854832 kubelet[2700]: E0128 02:33:05.854761 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" podUID="0a6be4a3-a931-4bdf-98fa-3be5929a5064" Jan 28 02:33:06.648562 sshd[5725]: pam_unix(sshd:session): session closed for user core Jan 28 02:33:06.659981 systemd[1]: sshd@21-10.230.34.254:22-68.220.241.50:53352.service: Deactivated successfully. Jan 28 02:33:06.662673 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 02:33:06.663813 systemd-logind[1488]: Session 24 logged out. Waiting for processes to exit. Jan 28 02:33:06.668440 systemd-logind[1488]: Removed session 24. Jan 28 02:33:06.752552 systemd[1]: Started sshd@22-10.230.34.254:22-68.220.241.50:53362.service - OpenSSH per-connection server daemon (68.220.241.50:53362). 
Jan 28 02:33:07.361713 sshd[5736]: Accepted publickey for core from 68.220.241.50 port 53362 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:33:07.363873 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:33:07.370340 systemd-logind[1488]: New session 25 of user core. Jan 28 02:33:07.377373 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 02:33:07.855991 kubelet[2700]: E0128 02:33:07.855889 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd7f4d764-szmpb" podUID="9d9dfd5e-a429-4d13-9ec6-6f8b582ac456" Jan 28 02:33:07.861485 sshd[5736]: pam_unix(sshd:session): session closed for user core Jan 28 02:33:07.867524 systemd[1]: sshd@22-10.230.34.254:22-68.220.241.50:53362.service: Deactivated successfully. Jan 28 02:33:07.872344 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 02:33:07.874233 systemd-logind[1488]: Session 25 logged out. Waiting for processes to exit. Jan 28 02:33:07.877713 systemd-logind[1488]: Removed session 25. Jan 28 02:33:08.856640 kubelet[2700]: E0128 02:33:08.855683 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" podUID="cd42b56d-5021-410e-8408-e15b3c52f065" Jan 28 02:33:12.871926 kubelet[2700]: E0128 02:33:12.871694 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" podUID="19aa6a03-3b76-49c3-840d-da43872b111b" Jan 28 02:33:13.005649 systemd[1]: Started sshd@23-10.230.34.254:22-68.220.241.50:44926.service - OpenSSH per-connection server daemon (68.220.241.50:44926). 
Jan 28 02:33:13.628656 sshd[5754]: Accepted publickey for core from 68.220.241.50 port 44926 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:33:13.631062 sshd[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:33:13.639190 systemd-logind[1488]: New session 26 of user core. Jan 28 02:33:13.643428 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 02:33:14.149805 sshd[5754]: pam_unix(sshd:session): session closed for user core Jan 28 02:33:14.160304 systemd[1]: sshd@23-10.230.34.254:22-68.220.241.50:44926.service: Deactivated successfully. Jan 28 02:33:14.172554 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 02:33:14.181316 systemd-logind[1488]: Session 26 logged out. Waiting for processes to exit. Jan 28 02:33:14.185080 systemd-logind[1488]: Removed session 26. Jan 28 02:33:15.857994 kubelet[2700]: E0128 02:33:15.857898 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vjdx" podUID="7c66daa0-da57-4a7e-a3d9-e335fd8bbbe5" Jan 28 02:33:18.859397 containerd[1508]: time="2026-01-28T02:33:18.859129684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:33:19.231535 containerd[1508]: time="2026-01-28T02:33:19.230633261Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:33:19.233902 containerd[1508]: time="2026-01-28T02:33:19.233529635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:33:19.233902 containerd[1508]: time="2026-01-28T02:33:19.233769664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 02:33:19.234512 kubelet[2700]: E0128 02:33:19.234341 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:33:19.234512 kubelet[2700]: E0128 02:33:19.234487 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:33:19.238276 containerd[1508]: time="2026-01-28T02:33:19.235187333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 02:33:19.238898 kubelet[2700]: E0128 02:33:19.238595 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5szp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7866ff566b-wrtzl_calico-apiserver(0a6be4a3-a931-4bdf-98fa-3be5929a5064): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:33:19.242235 kubelet[2700]: E0128 02:33:19.242123 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-wrtzl" podUID="0a6be4a3-a931-4bdf-98fa-3be5929a5064" Jan 28 02:33:19.258567 systemd[1]: Started sshd@24-10.230.34.254:22-68.220.241.50:44942.service - OpenSSH per-connection server daemon (68.220.241.50:44942). 
Jan 28 02:33:19.576439 containerd[1508]: time="2026-01-28T02:33:19.575744015Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:33:19.580191 containerd[1508]: time="2026-01-28T02:33:19.580108144Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 02:33:19.580558 containerd[1508]: time="2026-01-28T02:33:19.580357258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 02:33:19.581059 kubelet[2700]: E0128 02:33:19.580912 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:33:19.581059 kubelet[2700]: E0128 02:33:19.581005 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:33:19.581851 kubelet[2700]: E0128 02:33:19.581376 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8zps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bvmzd_calico-system(345054d8-51ec-4ec2-90c7-329ebe97ba46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 02:33:19.583452 kubelet[2700]: E0128 02:33:19.583362 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bvmzd" podUID="345054d8-51ec-4ec2-90c7-329ebe97ba46" Jan 28 02:33:19.883976 sshd[5790]: Accepted publickey for core from 68.220.241.50 port 44942 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:33:19.887754 sshd[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:33:19.900720 systemd-logind[1488]: New session 27 of user core. Jan 28 02:33:19.909556 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 28 02:33:20.896796 sshd[5790]: pam_unix(sshd:session): session closed for user core Jan 28 02:33:20.904637 systemd[1]: sshd@24-10.230.34.254:22-68.220.241.50:44942.service: Deactivated successfully. Jan 28 02:33:20.909783 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 02:33:20.912732 systemd-logind[1488]: Session 27 logged out. Waiting for processes to exit. Jan 28 02:33:20.916037 systemd-logind[1488]: Removed session 27. 
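Every pull in this log dies the same way: ghcr.io answers 404 ("trying next host - response was http.StatusNotFound"), containerd maps that to NotFound, and kubelet records the ErrImagePull. One way to confirm the v3.30.4 tags really are absent from the registry (rather than a node-side resolution problem) is to query the OCI distribution API directly. A minimal sketch, assuming the mirror repositories are public so ghcr.io issues an anonymous pull token; the helper name is mine:

```python
# Minimal sketch: ask ghcr.io whether a tag exists, via the standard
# OCI distribution API. Assumes a public repository (anonymous pull token).
import json
import urllib.error
import urllib.request

def tag_exists(repo: str, tag: str) -> bool:
    # Anonymous bearer token with pull scope on a public repo.
    token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    # HEAD the manifest; 200 means the tag resolves, 404 matches this log.
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

print(tag_exists("flatcar/calico/goldmane", "v3.30.4"))  # expect False per this log
```

A 404 here reproduces the containerd "failed to resolve reference" error independently of kubelet, which rules out kubelet- or CRI-side misconfiguration.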
Jan 28 02:33:21.901228 kubelet[2700]: E0128 02:33:21.899832 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7866ff566b-tbgpj" podUID="cd42b56d-5021-410e-8408-e15b3c52f065" Jan 28 02:33:22.857429 containerd[1508]: time="2026-01-28T02:33:22.857289353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 02:33:23.209427 containerd[1508]: time="2026-01-28T02:33:23.208514241Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:33:23.211530 containerd[1508]: time="2026-01-28T02:33:23.211455981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 02:33:23.211842 containerd[1508]: time="2026-01-28T02:33:23.211530232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 02:33:23.212528 kubelet[2700]: E0128 02:33:23.212380 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:33:23.213310 kubelet[2700]: E0128 02:33:23.212585 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:33:23.213310 kubelet[2700]: E0128 02:33:23.212905 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:50e6dbe8f85b451e9c2d8f88eee475c8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59xvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd7f4d764-szmpb_calico-system(9d9dfd5e-a429-4d13-9ec6-6f8b582ac456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 02:33:23.216679 containerd[1508]: time="2026-01-28T02:33:23.216640011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 02:33:23.533660 containerd[1508]: time="2026-01-28T02:33:23.533386778Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:33:23.535751 containerd[1508]: time="2026-01-28T02:33:23.534724579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 02:33:23.535751 containerd[1508]: time="2026-01-28T02:33:23.534943696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 02:33:23.537105 kubelet[2700]: E0128 02:33:23.536140 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:33:23.537105 kubelet[2700]: E0128 02:33:23.536265 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:33:23.537105 kubelet[2700]: E0128 02:33:23.536517 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59xvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd7f4d764-szmpb_calico-system(9d9dfd5e-a429-4d13-9ec6-6f8b582ac456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 02:33:23.538304 kubelet[2700]: E0128 02:33:23.538237 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd7f4d764-szmpb" podUID="9d9dfd5e-a429-4d13-9ec6-6f8b582ac456" Jan 28 02:33:23.857731 containerd[1508]: time="2026-01-28T02:33:23.856961947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 02:33:24.193350 containerd[1508]: time="2026-01-28T02:33:24.193178826Z" level=info 
msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:33:24.194966 containerd[1508]: time="2026-01-28T02:33:24.194811409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 02:33:24.194966 containerd[1508]: time="2026-01-28T02:33:24.194890272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 02:33:24.195163 kubelet[2700]: E0128 02:33:24.195100 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:33:24.195258 kubelet[2700]: E0128 02:33:24.195182 2700 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:33:24.197166 kubelet[2700]: E0128 02:33:24.195390 2700 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5mfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-858bccccf6-bqm86_calico-system(19aa6a03-3b76-49c3-840d-da43872b111b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 02:33:24.197166 kubelet[2700]: E0128 02:33:24.196993 2700 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-858bccccf6-bqm86" podUID="19aa6a03-3b76-49c3-840d-da43872b111b" Jan 28 02:33:26.023618 systemd[1]: Started sshd@25-10.230.34.254:22-68.220.241.50:39754.service - OpenSSH per-connection server daemon (68.220.241.50:39754). Jan 28 02:33:26.658350 sshd[5829]: Accepted publickey for core from 68.220.241.50 port 39754 ssh2: RSA SHA256:MvmOTWWAmuPnalM1kfFCrpm8gYLqtBE5J+5wFgq8rWc Jan 28 02:33:26.661277 sshd[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:33:26.670300 systemd-logind[1488]: New session 28 of user core. Jan 28 02:33:26.679365 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 28 02:33:27.368172 sshd[5829]: pam_unix(sshd:session): session closed for user core Jan 28 02:33:27.376975 systemd-logind[1488]: Session 28 logged out. Waiting for processes to exit. Jan 28 02:33:27.377761 systemd[1]: sshd@25-10.230.34.254:22-68.220.241.50:39754.service: Deactivated successfully. Jan 28 02:33:27.381638 systemd[1]: session-28.scope: Deactivated successfully. Jan 28 02:33:27.383979 systemd-logind[1488]: Removed session 28.
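Since containerd and ghcr.io agree that every calico/*:v3.30.4 reference is unresolvable, the csi-node-driver, apiserver, goldmane, whisker, and kube-controllers pods will stay in ImagePullBackOff until the workloads reference a tag the mirror actually serves (or until v3.30.4 is pushed to it). A small sketch for enumerating what the mirror does serve, under the same public-repo assumption as above; the tags/list endpoint may paginate, which this sketch ignores for brevity:

```python
# Minimal sketch: list the tags a ghcr.io repository actually serves, to find
# a pullable version. Assumes a public repo; ignores pagination for brevity.
import json
import urllib.request

def list_tags(repo: str) -> list[str]:
    token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/tags/list",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["tags"]

print(list_tags("flatcar/calico/apiserver"))
```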